This article provides detailed step-by-step solutions for Mock Exam Two. Each question covers a key aspect of Kubernetes cluster configuration—from creating storage classes and deployments to configuring ingress, RBAC, network policies, HPA setups, and troubleshooting common issues. Follow the solutions below to configure your clusters and gain hands-on experience with Kubernetes.
Question 1 – Creating a Default Local StorageClass
In this question, you will create a StorageClass named local-sc on Cluster One’s control plane. This StorageClass must be set as the default and must meet the criteria described in the steps below.
SSH into your control plane:
ssh cluster1-controlplane

Welcome to Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-1078-gcp x86_64)

 * Documentation:  https://help.ubuntu.com/
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

This system has been minimized by removing packages and content that are not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
Referencing the documentation, copy the example configuration and update it as follows:
Change the name to local-sc.
Update the annotation storageclass.kubernetes.io/is-default-class to "true".
Set the provisioner to kubernetes.io/no-provisioner.
Remove any fields not called for in the requirements, such as reclaimPolicy and mountOptions.
Retain only allowVolumeExpansion and volumeBindingMode: WaitForFirstConsumer.
The final YAML configuration should look like this:
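Reconstructed from the criteria above (the original file is not shown, so treat this as a sketch), the manifest would be:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-sc
  annotations:
    # Marks this StorageClass as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

Apply it with kubectl apply -f and confirm the (default) marker appears in kubectl get storageclass.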
Question 2 – Deployment with App and Sidecar Containers for Logging
This question guides you through creating a deployment named logging-deployment in the logging-ns namespace. The deployment uses one replica and runs two containers within the same pod:
app-container: Uses the busybox image to continuously append log entries to /var/log/app/app.log.
log-agent: Also based on busybox, this container tails the same log file.
Both containers share an emptyDir volume mounted at /var/log/app. Below is the complete YAML configuration (save as question2.yaml):
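A manifest matching the description might look like the following (the pod labels, log message, and sleep interval are assumptions; only the names, images, log path, and shared emptyDir volume come from the requirements):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logging-deployment
  namespace: logging-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logging          # assumed label
  template:
    metadata:
      labels:
        app: logging
    spec:
      containers:
      - name: app-container
        image: busybox
        # Continuously append log entries to the shared log file
        command: ["sh", "-c", "while true; do echo \"$(date) app log\" >> /var/log/app/app.log; sleep 5; done"]
        volumeMounts:
        - name: log-volume
          mountPath: /var/log/app
      - name: log-agent
        image: busybox
        # Tail the same log file written by app-container
        command: ["sh", "-c", "tail -f /var/log/app/app.log"]
        volumeMounts:
        - name: log-volume
          mountPath: /var/log/app
      volumes:
      - name: log-volume
        emptyDir: {}
```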
Question 3 – Creating an Ingress Resource to Route Traffic
Here, you will create an Ingress resource named webapp-ingress in the ingress-ns namespace. This ingress routes traffic to a service called webapp-svc, using the NGINX ingress controller with the following criteria:
Hostname: kodekloud-ingress.app
Path: / (with PathType: Prefix)
Forward traffic to webapp-svc on port 80
Below is the YAML configuration (save as question3.yaml):
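Based on the stated criteria, the ingress manifest would be roughly as follows (ingressClassName: nginx is an assumption for selecting the NGINX ingress controller; your lab may use a different class name or annotation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: ingress-ns
spec:
  ingressClassName: nginx   # assumed class name for the NGINX controller
  rules:
  - host: kodekloud-ingress.app
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc
            port:
              number: 80
```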
Part A: Create a CertificateSigningRequest (CSR) for User “john”
The private key is located at /root/cka/john.key and the CSR file at /root/cka/john.csr. Base64 encode the CSR file content for use in your CSR object.
The CSR object should use the signer kubernetes.io/kube-apiserver-client and specify these usages: digital signature, key encipherment, and client auth.
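You can produce the base64-encoded request with cat /root/cka/john.csr | base64 | tr -d '\n'. A CSR object matching the stated signer and usages might look like this (the object name "john" is an assumption; paste your encoded value in place of the placeholder):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john                # assumed object name
spec:
  request: <base64-encoded-content-of-/root/cka/john.csr>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - digital signature
  - key encipherment
  - client auth
```

After creating it, approve the request with kubectl certificate approve john.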
Part B: Grant RBAC Permissions with Role and RoleBinding
Create a Role named developer in the development namespace that allows the following verbs on pods: create, list, get, update, and delete. Then bind the role to user john. Save the following YAML as question5-rbac.yaml:
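A Role and RoleBinding satisfying these requirements would look like this (the RoleBinding name is an assumption; everything else follows from the stated verbs, resource, namespace, and user):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: development
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "list", "get", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding   # assumed binding name
  namespace: development
subjects:
- kind: User
  name: john
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

Verify with kubectl auth can-i list pods --as john -n development.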
Question 6 – Creating an Nginx Pod with an Internal Service and DNS Testing
In this exercise, you will deploy an Nginx pod named nginx-resolver and expose it internally using a ClusterIP service named nginx-resolver-service. Then you will verify DNS resolution using a BusyBox pod.
Create the Nginx pod:
kubectl run nginx-resolver --image=nginx
Expose the pod internally:
kubectl expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP
Verify the service endpoints:
kubectl get svc nginx-resolver-service
kubectl get pod -o wide
Run a temporary BusyBox pod to perform an nslookup:
kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service
Optionally, redirect the DNS lookup results to a file (e.g., /root/cka/nginx.svc) if necessary. Remember that Kubernetes creates DNS entries for both pods and services.
Question 7 – Creating a Static Pod
Create a static pod named nginx-critical on Node One. This pod should automatically restart on failure, and its manifest must reside in the /etc/kubernetes/manifests directory.
Generate a pod YAML definition via dry run:
kubectl run nginx-critical --image=nginx --restart=Always --dry-run=client -o yaml > static.yaml
SSH into Node One:
ssh cluster1-node01
Place the YAML file into /etc/kubernetes/manifests/static.yaml with contents similar to:
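The dry-run output, trimmed of generated fields, would look roughly like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-critical
spec:
  restartPolicy: Always   # ensures the pod restarts on failure
  containers:
  - name: nginx-critical
    image: nginx
```

Once the file is in /etc/kubernetes/manifests, the kubelet creates the static pod automatically; confirm with kubectl get pods on the control plane.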
Question 8 – Creating a Horizontal Pod Autoscaler (HPA)
Create an HPA for a deployment named backend-deployment in the backend namespace. The HPA should target an average memory utilization of 65% across all pods, with a minimum of 3 replicas and a maximum of 15. Below is the example HPA YAML (save as webapp-hpa.yaml):
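An autoscaling/v2 manifest matching these numbers would be along these lines (the HPA object name is an assumption; the target, namespace, metric, and replica bounds come from the requirements):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa         # assumed HPA name
  namespace: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-deployment
  minReplicas: 3
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 65
```

Check its status with kubectl get hpa -n backend.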
Question 9 – Troubleshooting a Non-Responsive API Server
If Cluster Two fails to respond to kubectl commands (e.g., with the error message “The connection to the server cluster2-controlplane:6443 was refused”), perform the following steps:
Run a command such as:
kubectl get nodes
which may yield:
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
Use crictl pods to check the running containers and notice that the kube-apiserver container is missing. From there, a typical next step is to inspect the static pod manifest at /etc/kubernetes/manifests/kube-apiserver.yaml for misconfigurations, check the container logs with crictl logs, and correct the manifest so the kubelet can restart the API server.
Question 10 – Configuring a Gateway for HTTPS Traffic
Modify the existing web gateway in the cka5673 namespace on Cluster One so that it handles HTTPS traffic on port 443 for kodekloud.com, using the TLS certificate stored in the kodekloud-tls secret.
Verify that the secret exists:
kubectl get secret -n cka5673
Retrieve the current gateway configuration:
kubectl get gateway -n cka5673 -o yaml > question10.yaml
Edit the YAML file to update the listener section. Change to the following:
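A listeners section matching the requirements might look like the following fragment (the listener name and Terminate TLS mode are assumptions, since the original gateway spec is not shown; hostname, port, and the certificate secret come from the question):

```yaml
listeners:
- name: https               # assumed listener name
  hostname: kodekloud.com
  port: 443
  protocol: HTTPS
  tls:
    mode: Terminate         # assumed: terminate TLS at the gateway
    certificateRefs:
    - name: kodekloud-tls
```

Re-apply the edited file with kubectl apply -f question10.yaml.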
Implement a network policy that allows traffic from frontend applications (namespace frontend) to backend applications (namespace backend), while blocking traffic from the databases (namespace databases). After reviewing the provided policies, net-pol-3.yaml is the correct choice. Its contents are as follows:
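Reconstructed from the description (a sketch, assuming the namespaces carry the standard kubernetes.io/metadata.name label), the policy would be roughly:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: net-pol-3
  namespace: backend
spec:
  podSelector: {}           # applies to all pods in the backend namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
```

Because only frontend is allowed, ingress from the databases namespace is denied by default once the policy selects the backend pods.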
Question 13 – Troubleshooting a Failed Deployment Due to Resource Quota
On Cluster Three, if the backend-api deployment fails to scale to three replicas due to resource quota limitations, follow these troubleshooting steps:
Describe the deployment to see error events:
kubectl describe deployment backend-api
Also check the ReplicaSets for error events indicating that pod creation is forbidden by resource quotas.
Describe the resource quota:
kubectl describe resourcequota cpu-mem-quota
An example output might be:
Name:            cpu-mem-quota
Namespace:       default
Resource         Used   Hard
--------         ----   ----
requests.cpu     200m   300m
requests.memory  256Mi  300Mi
If the new pod’s memory request (e.g., 128Mi) exceeds the remaining quota (300Mi hard − 256Mi used = 44Mi), the pod cannot be created and adjustments are needed.
Update the deployment’s resource requests so the total across three replicas stays within the quota. For instance, reduce the memory request from 128Mi to 100Mi, so that three replicas request 300Mi in total:
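In the deployment’s container spec, the change is confined to the resources block (a fragment; the container name and other fields stay unchanged):

```yaml
resources:
  requests:
    memory: 100Mi   # reduced from 128Mi so 3 replicas total 300Mi, within the 300Mi hard limit
```

Alternatively, kubectl set resources deployment backend-api --requests=memory=100Mi applies the same change without editing the manifest by hand.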
To verify that Calico is operating correctly and pod-to-pod communication works, deploy a test pod (such as an Nginx pod) and use a BusyBox or net-tools container for connectivity tests:
kubectl run web-app --image=nginx
kubectl run test --rm -it --image=jrcs/net-tools --restart=Never -- sh -c "curl -s <web-app-pod-IP>"
If the connectivity test is successful, Calico is configured and running correctly.
This article provides a comprehensive walkthrough of essential Kubernetes configurations and troubleshooting steps. By following these step-by-step solutions, you can better understand Kubernetes components and improve your operational skills.
Happy learning and good luck on your Kubernetes journey!