- Kubernetes architecture and its basic components
- Differences between Kubernetes and Docker Swarm
- How Kubernetes encapsulates containers within pods
- The roles of Deployments and Services in managing container lifecycles
- Hands-on examples for running an application on Kubernetes using YAML configuration files and command line tools
Kubernetes does not run containers directly on worker nodes; instead, it uses pods—the smallest deployable units that encapsulate one or more containers along with networking and storage resources.
Comparing Kubernetes and Docker Swarm
Previously, Docker Swarm enabled us to manage clusters of Docker hosts, automatically deploying container instances to ensure high availability and load balancing. In contrast, a Kubernetes cluster consists of a master node and multiple worker nodes. The master node orchestrates tasks such as scheduling, monitoring, restarting failed containers, and auto-scaling. The image below compares Docker Swarm and Kubernetes cluster architectures:
Kubernetes Pods: The Building Blocks
A fundamental difference between Docker Swarm and Kubernetes is the use of pods in Kubernetes. Each pod acts as a logical host for one or more containers. Although many applications maintain a one-to-one mapping between pods and containers, pods can also house multiple containers that work in close collaboration. For example, one container might manage web requests while another processes backend tasks, sharing the same network namespace and storage volumes. The following image illustrates a typical Kubernetes pod structure featuring two containers with shared storage and an assigned IP address on a worker node:
Deployments and Services in Kubernetes
In Docker Swarm, we relied on services to maintain high availability and load balancing by deploying multiple container instances. Kubernetes achieves a similar goal using Deployments, which create ReplicaSets to ensure that the desired number of pod instances are always running. Kubernetes Services provide network connectivity both internally and externally:
- Internal-Facing Services: Manage communication between different pods (e.g., between a web service and its corresponding database).
- External-Facing Services: Use load balancers to expose specific ports for external access.
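
As a sketch, the ideas above can be expressed in YAML. All names, images, and ports below are illustrative, not taken from this article: a Deployment that keeps three replicas of a web pod running, an internal-facing Service (ClusterIP, the default type), and an external-facing Service of type LoadBalancer:

```yaml
# Deployment: ensures three replicas of the voting-app pod stay running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: voting-app
  template:
    metadata:
      labels:
        app: voting-app
    spec:
      containers:
        - name: voting-app
          image: example/voting-app:latest   # illustrative image name
          ports:
            - containerPort: 80
---
# Internal-facing Service: cluster-internal IP only (ClusterIP is the default type)
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis            # routes traffic to pods carrying this label
  ports:
    - port: 6379
      targetPort: 6379
---
# External-facing Service: a cloud load balancer exposes port 80 to the outside
apiVersion: v1
kind: Service
metadata:
  name: voting-app
spec:
  type: LoadBalancer
  selector:
    app: voting-app
  ports:
    - port: 80
      targetPort: 80
```

The selector is what ties a Service to its pods: any pod whose labels match receives traffic, regardless of which Deployment created it.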



Managing Kubernetes with kubectl
Kubernetes is primarily managed using the kubectl command-line interface. Below are a few examples of common kubectl commands:
- Creating a Pod from a Definition File: `kubectl create -f pod-definition.yml`
- Listing All Pods and Their Statuses: `kubectl get pods`
- Getting Detailed Information about a Specific Pod: `kubectl describe pod my-nginx`
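
A short session against a running cluster might look like the following (the pod name, ages, and truncated describe output are illustrative):

```
$ kubectl create -f pod-definition.yml
pod/my-nginx created

$ kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
my-nginx   1/1     Running   0          20s

$ kubectl describe pod my-nginx
Name:         my-nginx
Namespace:    default
...
```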
Every Kubernetes YAML definition file contains four required top-level sections:

| Section | Description |
|---|---|
| apiVersion | Specifies the API version or schema version (e.g., v1) |
| kind | Defines the resource type (Pod, Deployment, Service) |
| metadata | Contains identifying data such as the name and labels |
| spec | Describes the resource configuration (images, ports, etc.) |
Creating a Simple Pod Definition
As an example, consider a Kubernetes YAML file that defines a basic pod running an Nginx container. The pod is named “my-nginx” and exposes port 80. Once the definition file (pod-definition.yml) is created, deploy the pod using the kubectl create command: `kubectl create -f pod-definition.yml`.
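
A minimal pod-definition.yml consistent with that description might look like this (the image tag and label value are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  labels:
    app: my-nginx        # illustrative label; used by Services to select this pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
```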
Deploying a Voting Application on Kubernetes
Let’s put theory into practice with an example: deploying a voting application on a Kubernetes cluster. In this scenario, assume your Kubernetes cluster is already operational. The objectives include deploying five different Docker containers for components such as Redis, Postgres (for the database), voting app, results app, and a worker application. The deployment process involves:
- Creating pods for each component.
- Configuring internal services for pod-to-pod communication (e.g., ensuring the voting app connects to Redis).
- Setting up external load balancer services for the voting and results apps.
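
Assuming you have written a definition file for each component, the steps above could be sketched as the following kubectl session (all file and service names are illustrative, not prescribed by this article):

```shell
# Create a pod for each of the five components
kubectl create -f redis-pod.yml
kubectl create -f postgres-pod.yml
kubectl create -f voting-app-pod.yml
kubectl create -f result-app-pod.yml
kubectl create -f worker-pod.yml

# Internal services for pod-to-pod communication
kubectl create -f redis-service.yml      # lets the voting app and worker reach Redis
kubectl create -f postgres-service.yml   # lets the worker and results app reach the database

# External (load balancer) services for the two user-facing apps
kubectl create -f voting-app-service.yml
kubectl create -f result-app-service.yml

# Verify that everything is running
kubectl get pods,services
```

Note that the worker needs no service of its own: it only makes outbound connections to Redis and Postgres, and nothing connects to it.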

Ensure that your YAML definition files are correctly configured and validated before deploying, for example with a client-side dry run (`kubectl create -f <file> --dry-run=client`), to minimize errors at runtime.