CKA Certification Course - Certified Kubernetes Administrator
Networking
CNI in Kubernetes
In this lesson, we explore how Kubernetes leverages the Container Network Interface (CNI) to manage container networking. You will gain an understanding of how network plugins are configured and used in a Kubernetes environment.
Earlier, we reviewed:
- The basics of networking and namespaces
- Docker networking fundamentals
- The evolution and rationale behind CNI
- A list of supported CNI plugins
Now, we focus on how Kubernetes configures the use of these network plugins.
As discussed in previous lessons, the CNI specification defines the responsibilities of the container runtime. In Kubernetes, container runtimes such as containerd or CRI-O create the container network namespaces and attach them to the correct network by invoking the appropriate network plugin. Although Docker was initially the primary container runtime, it has largely been replaced by containerd, which originated as an abstraction layer inside Docker itself.
Configuring CNI Plugins in Kubernetes
When a container is created, the container runtime invokes the necessary CNI plugin to attach the container to the network. Two common runtimes that follow this process are containerd and CRI-O.
Note
Container runtimes look for CNI plugin executables in the /opt/cni/bin directory, while network configuration files are read from the /etc/cni/net.d directory.
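These directories are not fixed by Kubernetes itself; they come from the container runtime's own configuration. As a minimal sketch, assuming a default containerd 1.x installation (section names can differ between containerd versions), the relevant settings appear in /etc/containerd/config.toml:
cat /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".cni]
  # Directory containerd searches for CNI plugin binaries
  bin_dir = "/opt/cni/bin"
  # Directory containerd reads network configuration files from
  conf_dir = "/etc/cni/net.d"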
Directory Structure for CNI Plugins and Configuration
The network plugins reside in /opt/cni/bin, and the configuration files that dictate which plugin to use are stored in /etc/cni/net.d. Typically, the container runtime selects the configuration file that appears first in alphabetical order.
For example, you might see the following directories:
ls /opt/cni/bin
bridge dhcp flannel host-local ipvlan loopback macvlan portmap ptp sample tuning vlan weave-ipam weave-net weave-plugin-2.2.1
ls /etc/cni/net.d
10-bridge.conf
In this case, only one configuration file is present, so the container runtime picks up 10-bridge.conf and uses the bridge plugin it specifies.
Understanding a CNI Bridge Configuration File
A typical CNI bridge configuration file, adhering to the CNI standard, might look like this:
cat /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
In this configuration:
- The "name" field (e.g., "mynet") represents the network name.
- The "type" field set to "bridge" indicates the use of a bridge plugin.
- The "bridge" field (e.g., "cni0") specifies the network bridge's name.
- The "isGateway" flag designates whether the bridge interface should receive an IP address so it can function as a gateway.
- The "ipMasq" option enables network address translation (NAT) through IP masquerading.
- The "ipam" (IP Address Management) section uses "host-local" to allocate IP addresses from the specified subnet ("10.22.0.0/16") and defines a default route.
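To make the runtime's role concrete, here is a minimal sketch of invoking the bridge plugin by hand, the way a runtime does under the CNI specification: parameters are passed through CNI_* environment variables and the network configuration is supplied on stdin. The namespace name testns and container ID abc123 are made-up values for illustration, and the commands must run as root:
# Create a network namespace standing in for a container's namespace
ip netns add testns
# Ask the bridge plugin to attach that namespace to the network;
# CNI_PATH lets it find the host-local IPAM plugin it delegates to
CNI_COMMAND=ADD \
CNI_CONTAINERID=abc123 \
CNI_NETNS=/var/run/netns/testns \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf
On success, the plugin prints a JSON result listing the interface it created and the IP address that host-local allocated from 10.22.0.0/16.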
Key Configuration Options
Understanding these configuration fields is crucial for troubleshooting and optimizing Kubernetes networking. The settings in this bridge configuration align with fundamental networking concepts such as bridging, routing, and NAT masquerading.
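You can verify the effect of these settings directly on a node. The commands below are a sketch assuming the bridge configuration above; by default, the host-local IPAM plugin records its allocations under /var/lib/cni/networks/<network-name>:
# Confirm the cni0 bridge exists and, because isGateway is true, holds a gateway IP
ip addr show cni0
# Confirm the masquerade rule created because ipMasq is true
iptables -t nat -L POSTROUTING -n
# List the IP addresses host-local has handed out for the "mynet" network
ls /var/lib/cni/networks/mynet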
This concludes our lesson on configuring CNI plugins with Kubernetes. We encourage you to apply these concepts through practical exercises to strengthen your Kubernetes networking skills.
Happy networking!