Certified Kubernetes Application Developer - CKAD
State Persistence
Solution: Persistent Volumes and Persistent Volume Claims (Optional)
In this lab exercise, you will configure persistent storage for a Kubernetes workload. You will deploy a Pod that writes log entries to a file, persist those logs with a hostPath volume, create a Persistent Volume (PV) and Persistent Volume Claim (PVC), resolve a binding issue caused by an access mode mismatch, update the Pod to use the PVC, and finally explore the reclaim policy behavior on deletion.
Viewing Application Logs Inside the Pod
After deploying the web application Pod, you can check its status and view the logs generated by the application. The application writes its log entries to /log/app.log within the container. To view these logs, run the following command:
kubectl exec webapp -- cat /log/app.log
This command displays the log entries generated by your application. If the Pod were deleted, these logs would be lost, because they live on the container's ephemeral, writable filesystem layer rather than on persistent storage. To inspect the Pod configuration, use:
kubectl describe pod webapp
Notice that there is no additional volume configured to persist the logs.
Configuring a HostPath Volume for Log Storage
Currently, the /log/app.log file exists only within the container's filesystem. To ensure the logs persist even when the Pod is recreated, configure a hostPath volume, which mounts a directory from the host system (for example, /var/log/webapp) into the container.
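A hostPath volume is declared under spec.volumes and attached to the container under volumeMounts; the two are linked by the volume name. Stripped down to just the relevant fields, the pairing looks like this sketch (the names and paths follow this lab):
spec:
  containers:
  - name: event-simulator
    volumeMounts:
    - name: log-volume
      mountPath: /log
  volumes:
  - name: log-volume
    hostPath:
      path: /var/log/webapp
With this pairing, anything the application writes to /log inside the container lands in /var/log/webapp on the host.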
Verify the Host Directory
First, ensure that the host directory exists and is empty:
ls /var/log/webapp
Update the Pod Configuration
Next, edit the Pod configuration to add a volume named log-volume of type hostPath:
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: default
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator
    imagePullPolicy: Always
    env:
    - name: LOG_HANDLERS
      value: file
    volumeMounts:
    - name: kube-api-access-lmntb
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
    - name: log-volume
      mountPath: /log
  dnsPolicy: ClusterFirst
  nodeName: controlplane
  restartPolicy: Always
  volumes:
  - name: kube-api-access-lmntb
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
  - name: log-volume
    hostPath:
      path: /var/log/webapp
Apply the Configuration
Save your changes. Most fields of a running Pod, including its volumes, cannot be edited in place; when kubectl edit rejects the change, it saves your modified manifest to a temporary file. Force the update by replacing the Pod from that file:
kubectl replace --force -f /tmp/kubectl-edit-XXXX.yaml
After the Pod is recreated with the new volume, verify that the logs are now being written to the host's /var/log/webapp/app.log file.
Creating a Persistent Volume (PV)
To leverage persistent storage, create a Persistent Volume. Here is an example PV specification using a hostPath:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /pv/log
Save the file as pv.yaml, then create the PV using:
kubectl create -f pv.yaml
Verify that the PV is available:
kubectl get pv
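Before any claim binds to it, the PV's status should be Available; the output will look roughly like this (the AGE column depends on when you created it):
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   AGE
pv-log   100Mi      RWX            Retain           Available           <none>         10s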
Creating a Persistent Volume Claim (PVC)
Now, create a Persistent Volume Claim to request storage from the configured PV. Initially, your PVC might look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-log-1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
Save the file as pvc.yaml, then apply it with:
kubectl create -f pvc.yaml
Check the state of the PVC:
kubectl get pvc
At this point, you might notice that the PVC remains in a Pending state due to an access mode mismatch: although the PV's 100Mi capacity easily satisfies the 50Mi request, the PVC asks for ReadWriteOnce access while the PV only offers ReadWriteMany, so the two cannot bind.
Resolving Access Mode Mismatch
To resolve the pending state, update the PVC to request the ReadWriteMany access mode so that it can bind to the PV.
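After the change, the claim differs from the original only in its accessModes (everything else stays as written above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-log-1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Mi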
Edit the pvc.yaml file accordingly and execute:
kubectl replace --force -f pvc.yaml
Verify that the PVC and PV are now correctly bound:
kubectl get pv
kubectl get pvc
You should see output similar to:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE
pv-log 100Mi RWX Retain Bound default/claim-log-1 <none> 5m5s
And:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim-log-1 Bound pv-log 100Mi RWX <none> 26s
Updating the Pod to Use the PVC
Update the webapp Pod configuration to mount the storage from the PVC instead of relying on the hostPath directly. Here is the updated volume configuration snippet for the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator
    imagePullPolicy: Always
    env:
    - name: LOG_HANDLERS
      value: file
    volumeMounts:
    - name: kube-api-access-lmntb
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
    - name: log-volume
      mountPath: /log
  dnsPolicy: ClusterFirst
  nodeName: controlplane
  restartPolicy: Always
  volumes:
  - name: kube-api-access-lmntb
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
  - name: log-volume
    persistentVolumeClaim:
      claimName: claim-log-1
Replace the previous Pod definition by forcing a replacement if necessary:
kubectl replace --force -f <your-updated-pod-file.yaml>
After the update, verify that logs are now available on the host through the PV by checking /pv/log/app.log.
Examining the Reclaim Policy and Cleanup Behavior
The persistent volume pv-log is configured with a reclaim policy of Retain. This means that when the PVC is deleted, the PV is not automatically removed; it remains in the cluster with a status of Released. Check its status with:
kubectl get pv pv-log
Expected output (the PV is still Bound while the claim exists):
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE
pv-log 100Mi RWX Retain Bound default/claim-log-1 <none> 9m47s
Deleting Resources
If you attempt to delete the PVC while it is still mounted by an active Pod, it remains in a Terminating state until the Pod is gone. To fully remove the PVC and allow the PV to transition to Released, you must also delete the associated Pod.
For example:
Delete the PVC:
kubectl delete pvc claim-log-1
If the PVC appears stuck, delete the Pod:
kubectl delete pod webapp
Finally, check the PV status to confirm it has transitioned to Released:
kubectl get pv pv-log
This lab exercise has guided you through configuring a hostPath volume for log persistence, creating and binding a PV and PVC, updating a Pod to utilize the PVC, and understanding the effects of the reclaim policy on cleanup behavior. For more detailed Kubernetes concepts and examples, consider exploring the official Kubernetes Documentation.