Overview of Cilium Network Policy covering L3, L4, and L7 rules, selector semantics, port and CIDR handling, service and FQDN matching, entities, deny rules, and cluster-wide policies
In this lesson we’ll cover Cilium Network Policies from basic L3/L4 (network and transport) rules—similar to Kubernetes NetworkPolicy—to advanced L7 (application) features that make Cilium policies more powerful and flexible. You’ll learn selector semantics, port and CIDR handling, service and FQDN matching, entities (host, world, kube-apiserver), deny rules, and cluster-wide policies.

Quick reference table
| Concept | Use case | Example |
| --- | --- | --- |
| Namespace-scoped endpoint selector | Restrict traffic to pods within the same namespace | `endpointSelector.matchLabels` |
| Allow/deny lists | Permit or block specific ingress/egress | `ingress: []` (deny all) or `ingress: - {}` (allow all sources) |
| L4 (ports) | Restrict by port or port range | `toPorts.ports.port` and `endPort` |
| L7 (HTTP/DNS/FQDN) | Application-level controls (methods, paths, DNS names) | `rules.http`, `rules.dns`, `toFQDNs` |
A CiliumNetworkPolicy specifies apiVersion, kind, metadata (including namespace) and a spec that selects the endpoints (pods) to which the policy applies. Cilium uses endpointSelector to select endpoints. By default, policies are namespace-scoped: a policy created in namespace dev matches endpoints in dev unless you explicitly reference other namespaces.

Example — allow ingress to backend pods from frontend pods in the same namespace (dev):
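A minimal sketch of that rule (the policy name and role labels are illustrative):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "backend-from-frontend"   # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: backend               # pods this policy applies to
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend        # allowed sources, same namespace by default
```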
Cilium’s endpointSelector is namespace-aware by default. If you need cross-namespace matching, include explicit namespace labels in matchLabels or use a cluster-wide policy.
Allow all endpoints in the same namespace (empty selector in fromEndpoints permits any source in the policy’s namespace):
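A sketch of that rule, assuming illustrative names; the empty `{}` selector in fromEndpoints matches any endpoint in the policy’s namespace:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-same-namespace"    # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - {}                      # empty selector: any endpoint in dev
```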
To explicitly deny all ingress/egress for selected endpoints use empty lists (ingress: [] and egress: []). In contrast, a rule like ingress: - {} allows ingress from any source because it represents a rule entry with no source constraints.
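The deny-all form, as a sketch with an illustrative name and selector:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "deny-all"                # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress: []                     # empty list: no rule entries, all ingress denied
  egress: []                      # likewise, all egress denied
  # By contrast, "ingress: [{}]" (one rule entry with no constraints)
  # would allow ingress from any source.
```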
Match services instead of direct pods
You can match traffic via service references: k8sService (serviceName + namespace) or k8sServiceSelector (label selector + namespace). This implicitly matches the pods behind the service and can reference services across namespaces.
```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l3-rule-to-services"
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: frontend
  egress:
    - toServices:
        # Services can be referenced by name + namespace
        - k8sService:
            serviceName: backend-service
            namespace: dev
        # Or by selector + namespace
        - k8sServiceSelector:
            selector:
              matchLabels:
                app: app1
            namespace: dev
```
CIDR and CIDR sets (with exceptions)
Use toCIDR and toCIDRSet to allow traffic to IPv4/IPv6 ranges and to exclude subranges with the except field.
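A sketch combining both forms (the CIDR ranges and policy name are illustrative):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "cidr-rule"               # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: backend
  egress:
    - toCIDR:
        - 10.10.0.0/16            # allow the whole range
    - toCIDRSet:
        - cidr: 192.168.0.0/16
          except:
            - 192.168.10.0/24     # carve this subrange out of the allow
```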
You can target ports without specifying L3 selectors. If a rule matches only on ports, it applies to traffic targeting any endpoint (subject to the policy’s namespace scoping).
Allow egress on TCP port 80 to any pod (in the policy’s namespace):
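A sketch of a port-only rule, with illustrative names; note there is no toEndpoints/toCIDR list, only an L4 match:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l4-egress-80"            # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: frontend
  egress:
    - toPorts:
        - ports:
            - port: "80"
              protocol: TCP
```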
Cilium supports application-layer controls including HTTP, DNS, and FQDN-based egress. These L7 filters require L4 match criteria (port/protocol) to scope inspection.

HTTP filtering example — only allow GET /products from frontend to backend on port 80:
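A sketch of that HTTP rule (policy name and role labels are illustrative); the `rules.http` block sits alongside `ports` inside the toPorts entry:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l7-http-rule"            # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "GET"     # only GET requests
                path: "/products" # only this path
```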
When restricting egress, ensure your pods can still reach DNS (kube-dns) or name resolution will break. Cilium can inspect DNS L7 payloads and also supports toFQDNs, tracking DNS responses and allowing the resolved IP addresses at runtime.
Example — allow DNS queries only to kube-dns and permit queries for google.com and subdomains; also allow HTTP to those FQDNs on TCP/80:
```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "dns-and-fqdn"
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: backend
  egress:
    # Allow queries to kube-dns (namespace and label keys quoted to avoid YAML key ambiguity)
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            "k8s:k8s-app": kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchName: "google.com"
              - matchPattern: "*.google.com"
    # Allow data egress to FQDNs resolved from those DNS responses
    - toFQDNs:
        - matchName: "google.com"
        - matchPattern: "*.google.com"
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
```
Cilium inspects DNS responses and adds the resolved IPs for allowed FQDNs to a runtime allow list so traffic to those IPs is permitted.
Cilium defines entities for common endpoint types such as the Kubernetes API server, host, and world (the public internet). Use toEntities or fromEntities to reference them.
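For example, a sketch allowing egress to the Kubernetes API server via the kube-apiserver entity (policy name and selector are illustrative):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-to-apiserver"      # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: backend
  egress:
    - toEntities:
        - kube-apiserver          # built-in entity for the API server
```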
Default Kubernetes behavior allows node-local processes to talk to pods on the same node. You can change this behavior with Cilium’s agent option --allow-localhost=policy, which lets policies control host <-> pod connections.
After enabling policy-controlled localhost behavior, allow host processes to talk to a pod explicitly:
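A sketch of that allow rule using the host entity (policy name and selector are illustrative):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-from-host"         # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEntities:
        - host                    # processes on the local node
```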
When you expose a pod via a NodePort, the internet can reach it by default. If you attach a CiliumNetworkPolicy that denies ingress to that pod, the NodePort traffic will also be blocked unless you explicitly allow it. To permit ingress from the internet, allow the world entity.
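A sketch re-allowing internet ingress via the world entity (policy name and selector are illustrative):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-from-world"        # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: frontend
  ingress:
    - fromEntities:
        - world                   # traffic from outside the cluster
```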
Cilium supports explicit deny lists. Deny rules take precedence over allow rules, ensuring that disallowed sources remain blocked even if they match an allow rule.

Example — allow frontend but explicitly deny pods labeled test: test (deny wins):
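A sketch of that combination, with illustrative labels; `ingressDeny` wins over the `ingress` allow for any pod carrying both labels:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-frontend-deny-test"  # illustrative name
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend        # allowed sources
  ingressDeny:
    - fromEndpoints:
        - matchLabels:
            test: test            # denied even if also labeled role=frontend
```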
Standard Kubernetes NetworkPolicies are namespace-scoped. Cilium provides CiliumClusterwideNetworkPolicy for policies that apply across namespaces. Cluster-wide policies omit the namespace field in metadata and their endpointSelector matches endpoints cluster-wide.
Allow all backend pods cluster-wide to accept ingress from frontend pods cluster-wide:
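A sketch of that cluster-wide rule; note the kind changes and metadata has no namespace (the role labels are illustrative):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "backend-from-frontend-clusterwide"  # no namespace: cluster-wide scope
spec:
  endpointSelector:
    matchLabels:
      role: backend               # backend pods in any namespace
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend        # frontend pods in any namespace
```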
To ensure all pods can resolve DNS, create a cluster-wide policy that matches all endpoints and allows egress to kube-dns (and any DNS names / external FQDNs your workloads require):
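A sketch of such a baseline policy, assuming kube-dns runs in kube-system with its usual labels; the empty endpointSelector matches every endpoint in the cluster, and the wildcard DNS pattern is an illustrative choice you may want to narrow:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "allow-dns-clusterwide"   # illustrative name
spec:
  endpointSelector: {}            # every endpoint in the cluster
  egress:
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            "k8s:k8s-app": kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*" # permit queries for any name
```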