CKA Certification Course - Certified Kubernetes Administrator
Networking
Solution: Networking Weave (Optional)
In this lesson, we explore key aspects of the Kubernetes cluster networking configuration. We will answer several questions regarding node count, networking solutions, Weave deployment details, and pod network settings.
1. How Many Nodes are in the Cluster?
To determine the total number of nodes in the Kubernetes cluster, use the following command:
kubectl get node
The output shows two nodes—one named "controlplane" and one named "node01":
controlplane ➜ kubectl get node
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 37m v1.27.0
node01 Ready <none> 37m v1.27.0
Note
The command output indicates that the cluster consists of one control plane node and one worker node.
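If you only need the count, a quick one-liner is to suppress the header row and count the remaining lines:
kubectl get nodes --no-headers | wc -l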
2. What is the Networking Solution Used by the Cluster?
The networking solution is identified by inspecting the CNI (Container Network Interface) configuration files. First, navigate to the directory containing these configuration files:
controlplane ➜ cd /etc/cni/net.d
controlplane /etc/cni/net.d ➜ ls
10-weave.conflist
Now, view the contents of the Weave configuration file:
controlplane /etc/cni/net.d ➜ cat 10-weave.conflist
{
"cniVersion": "0.3.0",
"name": "weave",
"plugins": [
{
"name": "weave",
"type": "weave-net",
"hairpinMode": true
},
{
"type": "portmap",
"capabilities": {"portMappings": true},
"snat": true
}
]
}
controlplane /etc/cni/net.d ➜
From the configuration, it is clear that the cluster uses Weave as its networking solution.
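If you also want to see which CNI plugin binaries are installed (as opposed to which one is configured), they typically live in /opt/cni/bin on a kubeadm-provisioned cluster:
ls /opt/cni/bin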
3. How Many Weave Agents are Deployed in the Cluster?
To find out how many Weave agents (pods) are running in the cluster, list all pods in the kube-system namespace:
kubectl get pods -n kube-system
The output includes two Weave pods:
NAME READY STATUS RESTARTS AGE
coredns-5d78c9869d-597bx 1/1 Running 0 38m
coredns-5d78c9869d-cqgw8 1/1 Running 0 38m
etcd-controlplane 1/1 Running 0 38m
kube-apiserver-controlplane 1/1 Running 0 38m
kube-controller-manager-controlplane 1/1 Running 0 38m
kube-proxy-45bqq 1/1 Running 0 38m
kube-proxy-wjvbv 1/1 Running 0 38m
kube-scheduler-controlplane 1/1 Running 0 38m
weave-net-mknr5 2/2 Running 0 38m
weave-net-qgk9 2/2 Running 0 38m
There are two Weave pods deployed, indicating one Weave agent per node.
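You can confirm this without scanning the full pod list: Weave runs as a DaemonSet, so its desired and ready counts should match the node count. Assuming the standard manifest's name=weave-net label, either of the following works:
kubectl get daemonset weave-net -n kube-system
kubectl get pods -n kube-system -l name=weave-net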
4. On Which Nodes are the Weave Pods Running?
To check on which nodes the Weave agents are running, include the -o wide option when listing pods:
kubectl get pods -n kube-system -o wide
The output shows:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5d78c9869d-597bx 1/1 Running 0 38m 10.244.0.2 controlplane <none> <none>
coredns-5d78c9869d-cqgw8 1/1 Running 0 38m 10.244.0.3 controlplane <none> <none>
etcd-controlplane 1/1 Running 0 38m 192.6.255.3 controlplane <none> <none>
kube-apiserver-controlplane 1/1 Running 0 38m 192.6.255.3 controlplane <none> <none>
kube-controller-manager-controlplane 1/1 Running 0 38m 192.6.255.3 controlplane <none> <none>
kube-proxy-45bqq 1/1 Running 0 38m 192.6.255.3 controlplane <none> <none>
kube-proxy-wjvbv 1/1 Running 0 38m 192.6.255.5 node01 <none> <none>
kube-scheduler-controlplane 1/1 Running 0 38m 192.6.255.3 controlplane <none> <none>
weave-net-mknr5 2/2 Running 0 38m 192.6.255.3 controlplane <none> <none>
weave-net-qgk9 2/2 Running 0 38m 192.6.255.5 node01 <none> <none>
Here, "weave-net-mknr5" is running on the "controlplane" node, and "weave-net-qgk9" is running on "node01". This confirms that each node hosts one Weave agent.
5. What is the Name of the Bridge Network Interface Created by Weave on Each Node?
To verify the network interfaces on a node, run:
ip add
A sample output from the "controlplane" node appears as follows:
controlplane /etc/cni/net.d ➜ ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default
link/ether 7a:30:47:3e:64:61 brd ff:ff:ff:ff:ff:ff
4: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default
link/ether 76:54:21:d6:20:ba brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/16 brd 10.244.255.255 scope global weave
valid_lft forever preferred_lft forever
...
controlplane /etc/cni/net.d ➜
The interface named "weave" is the bridge network interface created by Weave.
6. What is the Pod IP Address Range Configured by Weave?
The assigned IP address range for pods can be observed from the network interface details of the "weave" bridge:
inet 10.244.0.1/16 brd 10.244.255.255 scope global weave
To further verify, check the logs of one of the Weave pods:
kubectl logs -n kube-system weave-net-mknr5
Within the logs, you will find an entry such as:
INFO: 2023/07/26 03:42:17.199330 Command line options: map[... ipalloc-range:10.244.0.0/16 ... ]
...
This confirms that the pod IP address range is set to 10.244.0.0/16.
Tip
Verifying both interface settings and pod logs can help ensure the accuracy of the IP allocation range.
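If you prefer to pull just that setting out of the logs, you can grep for it. Note that the weave-net pod runs two containers; if kubectl logs asks you to pick one, the main container is typically named weave:
kubectl logs -n kube-system weave-net-mknr5 -c weave | grep ipalloc-range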
7. What is the Default Gateway for Pods Deployed on Node "node01"?
To determine the default gateway for pods running on "node01," follow these steps:
Step 1: Create a Busybox Pod Manifest
Generate a YAML configuration for a simple Busybox pod using the following command:
kubectl run busybox --image=busybox --dry-run=client -o yaml -- sleep 1000 > busybox.yaml
Next, edit the busybox.yaml file to ensure the pod is scheduled on "node01" by adding the nodeName field within the spec:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  nodeName: node01
  containers:
  - args:
    - sleep
    - "1000"
    image: busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
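As an aside, you can skip editing the manifest by passing the same nodeName setting inline with --overrides (a minimal sketch; the flag expects the apiVersion field to be included in the JSON):
kubectl run busybox --image=busybox --overrides='{"apiVersion":"v1","spec":{"nodeName":"node01"}}' -- sleep 1000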
Step 2: Deploy the Busybox Pod
Apply the configuration file:
kubectl apply -f busybox.yaml
Verify that the pod is running:
kubectl get pod
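Adding -o wide also confirms that the pod landed on "node01":
kubectl get pod busybox -o wide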
Step 3: Inspect the Routing Table Inside the Pod
Once the Busybox pod is running, execute this command to view its routing table:
kubectl exec busybox -- ip route
The output should display the default route, for example:
default via 10.244.192.0 dev eth0
10.244.0.0/16 dev eth0 scope link src 10.244.192.1
This shows that the default gateway for pods on node "node01" is 10.244.192.0.
Warning
Ensure that the Busybox pod is scheduled on the intended node by verifying the nodeName field in your YAML configuration.
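As a cross-check, the gateway reported inside the pod should match the address assigned to the weave bridge on "node01" itself. If the lab gives you SSH access to the node, you can compare them directly:
ssh node01 ip address show weave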
Summary
In this lab, we verified several important network configurations:
| Configuration Aspect | Outcome |
| --- | --- |
| Number of nodes in the cluster | Two nodes: "controlplane" and "node01" |
| Networking solution used | Weave |
| Number of Weave agents | Two agents (one per node) |
| Bridge network interface created by Weave | "weave" |
| Pod IP address range | 10.244.0.0/16 |
| Default gateway for pods on "node01" | 10.244.192.0 |
For additional information on Kubernetes networking, you can explore the Kubernetes Documentation.
Happy networking!