Azure Kubernetes Service

Networking in AKS

AKS Networking Options

When you provision an Azure Kubernetes Service (AKS) cluster, selecting the right network plugin is crucial for performance, security, and scalability. AKS supports two primary options:

  • Kubenet (Basic): Uses separate address spaces for nodes and pods, with NAT and user-defined routes (UDRs).
  • Azure CNI (Advanced): Integrates pods directly into your Azure Virtual Network (VNet), assigning them first-class IPs.

Both plugins install automatically on each node during cluster creation and handle pod-to-node and inter-node communication.


Kubenet Overview

With Kubenet, AKS allocates node IPs from your VNet subnet and assigns pods to a secondary, logically distinct address space (by default a /24 per node). Outbound traffic from pods uses SNAT on the node IP, and inter-node pod traffic flows via UDRs you configure.
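The split between the node subnet and the per-node pod ranges can be sketched with Python's standard `ipaddress` module. The address ranges below are hypothetical examples, not AKS defaults you must use (though 10.244.0.0/16 is the common kubenet pod CIDR default):

```python
from ipaddress import ip_network

# Hypothetical address plan: node IPs come from a VNet subnet, while
# kubenet carves a separate, non-VNet pod CIDR into one /24 per node.
node_subnet = ip_network("10.240.0.0/24")   # node IPs (VNet)
pod_cidr    = ip_network("10.244.0.0/16")   # pod address space (non-VNet)

# Each node receives one /24 slice of the pod CIDR.
per_node_ranges = list(pod_cidr.subnets(new_prefix=24))

for node_index in range(3):  # three example nodes
    # Skip the first few host addresses, which Azure reserves in every subnet.
    node_ip = list(node_subnet.hosts())[node_index + 3]
    print(f"node{node_index}: ip={node_ip}  pod range={per_node_ranges[node_index]}")
```

A /16 pod CIDR yields 256 such /24 slices, which caps the number of nodes the cluster can hold under this scheme.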

The image illustrates networking options for AKS using KubeNet, showing two virtual machines (VM1 and VM2) each with two pods, connected through specific IP addresses and a UDR (User Defined Route).

For example, pod1 on VM1 sending packets to pod1 on VM2 is routed through a UDR on Subnet1, directing traffic to VM2, which then forwards to the target pod.

When pods access external resources, AKS applies SNAT:

The image illustrates networking options for AKS (Azure Kubernetes Service) using KubeNet, showing an AKS cluster with virtual machines and pods, and a frontend-backend network setup with SNAT (Source Network Address Translation).

In this scenario, a pod with IP 192.168.1.4 reaching a VM in another VNet (10.10.1.1) appears as the node’s IP and port to the destination.

Warning

Ensure your subnet and SNAT port allocation can handle peak pod-to-external flows. SNAT port exhaustion can disrupt outbound connectivity.
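A rough back-of-the-envelope check makes the exhaustion risk concrete. The figure of 1,024 SNAT ports per node is an assumption here (it matches the default Azure Standard Load Balancer allocation for small backend pools; verify the value for your own outbound configuration):

```python
# Sketch: estimate concurrent outbound flows per pod before SNAT
# port exhaustion. 1024 ports/node is an assumed default allocation;
# check your load balancer's actual per-instance SNAT allocation.
snat_ports_per_node = 1024
pods_per_node = 30

# In the worst case, every pod on a node talks to the same external
# destination IP:port, so all pods share the node's SNAT ports.
flows_per_pod = snat_ports_per_node // pods_per_node
print(f"~{flows_per_pod} concurrent flows per pod before exhaustion")
```

If that per-pod budget is below your expected peak, consider more outbound IPs, a NAT gateway, or fewer pods per node.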


Azure CNI Overview

Azure CNI integrates directly into your Azure VNet, assigning both nodes and pods IPs from the same subnet. Pods become first-class citizens on the VNet, enabling native routing to other Azure services, on-premises systems, and peered VNets (subject to NSGs/UDRs).

The image is a diagram illustrating Azure CNI (Container Networking Interface) connectivity, showing how containers connect to Azure services, the internet, peer virtual networks, and on-premise systems. It includes icons representing various components like containers, virtual machines, and service endpoints.

Key benefits:

  • Native IPs for pods—no SNAT for intra-VNet traffic.
  • Automatic NSG/UDR enforcement on pod interfaces.
  • Compatibility with Ingress controllers, DNS, Kubernetes Network Policies, and Windows Server node pools.

Note

Plan VNet size carefully: Azure CNI consumes one IP per pod, plus one per node. Azure also reserves five addresses in every subnet, so a /24 leaves only 251 usable IPs to share across nodes and their pods. Allocate your address space accordingly.
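The sizing rule can be checked numerically. The node count and max-pods value below are hypothetical inputs; the five reserved addresses per subnet is standard Azure behavior:

```python
import math

# Sketch of Azure CNI subnet sizing: each node needs one IP for itself
# plus one IP per pod up to its max-pods setting.
nodes = 10                 # hypothetical node count
max_pods_per_node = 30     # hypothetical max-pods setting
azure_reserved = 5         # Azure reserves 5 addresses in every subnet

needed = nodes * (1 + max_pods_per_node) + azure_reserved
# Smallest subnet prefix whose address count covers the requirement.
prefix = 32 - math.ceil(math.log2(needed))
print(f"{needed} IPs needed -> at least a /{prefix} subnet")
```

For 10 nodes at 30 pods each this comes to 315 addresses, so a /23 (512 addresses) is the smallest fit; leave headroom for scale-out and upgrades.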


How Azure CNI Works

Azure CNI implements the CNI spec with two modules:

  1. Networking: Attaches a virtual network interface (vNIC) to each container.
  2. IP Address Management (IPAM): Allocates/deallocates IPs from the subnet pool.

The image is a diagram titled "CNI Modules," showing that CNI includes "Networking" and "IP Address Management (IPAM)" as components.

At cluster creation, Azure reserves a pool of secondary IPs on each node’s NIC equal to your max pods-per-node setting (e.g., 30 secondary IPs for 30 pods). When a container starts, Azure CNI assigns an IP from this pool to the pod’s vNIC; it returns to the pool on termination.
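The allocate-on-start, return-on-terminate behavior described above can be modeled with a minimal pool sketch (an illustration of the IPAM idea, not Azure CNI's actual implementation):

```python
# Minimal sketch of per-node IPAM: a NIC pre-reserves a pool of
# secondary IPs; pod start borrows an address, pod stop returns it.
class NodeIpPool:
    def __init__(self, ips):
        self.free = list(ips)     # pre-reserved secondary IPs
        self.in_use = {}          # pod name -> assigned IP

    def allocate(self, pod):
        if not self.free:
            raise RuntimeError("pool exhausted: raise max pods or resize subnet")
        self.in_use[pod] = self.free.pop(0)
        return self.in_use[pod]

    def release(self, pod):
        # Terminated pod's IP goes back to the pool for reuse.
        self.free.append(self.in_use.pop(pod))

pool = NodeIpPool([f"10.240.0.{i}" for i in range(10, 13)])
ip = pool.allocate("web-0")      # pod gets the first free secondary IP
pool.release("web-0")            # IP returns to the pool on termination
```

The real plugin performs these steps against the subnet's IP pool via the CNI `ADD`/`DEL` calls issued by the kubelet.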

Under the hood, a VNet bridge orchestrates these assignments:

The image illustrates IP management in a network interface (NIC) with a VNet bridge, showing multiple IP addresses (IP0, IP1, IP2, IP3) and indicating Azure's support for 250 containers per VM and 64,000 IPs per VNet.

Azure currently supports up to 250 pods per node and 64,000 IP addresses per VNet (a /16).


Comparing Kubenet and Azure CNI

The image is a comparison table between Kubenet and CNI, detailing their capabilities in areas like cluster deployment, connectivity, and network policies. It highlights differences in support for features such as virtual nodes and multiple clusters sharing one subnet.

| Feature | Kubenet | Azure CNI |
| --- | --- | --- |
| Pod IP allocation | Secondary network (/24 per node) | First-class VNet IP |
| Outbound NAT | SNAT on node IP | No SNAT for VNet traffic |
| NSG & UDR enforcement | Nodes only | Pods and nodes |
| Max pods per node | Limited by secondary CIDR (/24 ≈ 250 pods) | Up to 250 pods per node |
| Windows Server node pools | Not supported | Supported (only CNI option) |
| Virtual nodes | Not supported | Supported |

When to Use Each Plugin

| Use Case | Recommended Plugin |
| --- | --- |
| Limited VNet address space | Kubenet |
| No need for pod inbound connectivity | Kubenet |
| Can tolerate SNAT overhead | Kubenet |
| Pods require direct VNet routing | Azure CNI |
| Advanced Azure Network Policies | Azure CNI |
| Virtual nodes or Windows pools | Azure CNI |
