In this tutorial, you'll learn how to control pod placement in Google Kubernetes Engine (GKE) by applying node taints and tolerations. We'll cover:
Creating a GKE cluster with a tainted default node pool
Adding an untainted node pool
Updating node pool taints post-creation
Provisioning a dedicated node pool with a unique taint
Deploying pods with and without tolerations to observe scheduling behavior
Removing a taint and watching pods land on the default pool
Prerequisites
Google Cloud SDK installed, or access to Cloud Shell.
Authentication configured (gcloud auth login).
Set your project and compute zone:
gcloud config set project YOUR_PROJECT_ID
gcloud config set compute/zone us-west1-a
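Before proceeding, you can confirm the active settings:
gcloud config list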
Node Pools & Taints Overview
Node Pool                      | Taint                               | Purpose
default                        | function=research:PreferNoSchedule  | Research workloads
gke-deep-dive-pool             | none                                | Shared/utility services
gke-deep-dive-pool (updated)   | function=shared:NoSchedule          | Shared services
gke-deep-dive-pool-dedicated   | dedicated=dev:NoExecute             | Development workloads
The three effects differ in strictness: PreferNoSchedule is a soft preference, NoSchedule blocks new pods that lack a matching toleration, and NoExecute additionally evicts untolerated pods that are already running.
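Throughout the tutorial, a quick way to list every node's taints at once is a jsonpath query (output formatting may vary slightly by kubectl version):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'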
1. Create a Cluster with a Tainted Default Node Pool
gcloud container clusters create gke-deep-dive \
--num-nodes=1 \
--disk-type=pd-standard \
--disk-size=10 \
--node-taints=function=research:PreferNoSchedule
Cluster provisioning can take 10–15 minutes. Use gcloud container operations list to track progress.
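For example:
gcloud container operations list
gcloud container clusters list
The cluster is ready when its STATUS shows RUNNING.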
Verify the Taint
kubectl get nodes
kubectl describe node <NODE_NAME> | grep -A1 "Taints"
Expected output:
Taints:
function=research:PreferNoSchedule
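Because PreferNoSchedule is only a soft preference, the scheduler may still place untolerated pods here if no better node fits. A research workload that explicitly tolerates this taint would include a toleration like the following sketch in its pod spec:
tolerations:
- key: "function"
  operator: "Equal"
  value: "research"
  effect: "PreferNoSchedule"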
2. Add an Untainted Node Pool
New node pools inherit no taints by default. Create one:
gcloud container node-pools create gke-deep-dive-pool \
--cluster=gke-deep-dive \
--num-nodes=1 \
--disk-type=pd-standard \
--disk-size=10
Verify:
kubectl get nodes
kubectl describe node <NEW_NODE_NAME> | grep -A1 "Taints"
Expected:
Taints:
<none>
3. Update a Node Pool’s Taint
Apply a new taint to the existing pool:
gcloud beta container node-pools update gke-deep-dive-pool \
--cluster=gke-deep-dive \
--node-taints=function=shared:NoSchedule
Verify:
kubectl describe node <NEW_NODE_NAME> | grep -A1 "Taints"
Expected:
Taints:
function=shared:NoSchedule
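Note that the update replaces the pool's entire taint list rather than appending to it. To keep multiple taints on a pool, pass them comma-separated; the env=prod taint below is purely illustrative:
gcloud beta container node-pools update gke-deep-dive-pool \
    --cluster=gke-deep-dive \
    --node-taints=function=shared:NoSchedule,env=prod:NoSchedule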
4. Create a Dedicated Node Pool with a Different Taint
Provision a third pool for development workloads:
gcloud container node-pools create gke-deep-dive-pool-dedicated \
--cluster=gke-deep-dive \
--num-nodes=1 \
--disk-type=pd-standard \
--disk-size=10 \
--node-taints=dedicated=dev:NoExecute
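The tutorial never schedules a pod onto this pool, but for reference, a pod targeting it would need a matching toleration; without one, the NoExecute effect evicts the pod even after it starts. A minimal sketch (dev-pod is an illustrative name, not part of the tutorial):
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "dev"
    effect: "NoExecute"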
5. Deploy a Pod That Tolerates the Shared Taint
Save as shared-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "function"
    operator: "Equal"
    value: "shared"
    effect: "NoSchedule"
Apply and inspect scheduling:
kubectl apply -f shared-pod.yaml
kubectl get pods -o wide
The pod should land on the gke-deep-dive-pool node: its toleration lets it past the function=shared:NoSchedule taint, while the scheduler prefers to avoid the default pool's PreferNoSchedule taint.
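To confirm the assigned node directly:
kubectl get pod shared-pod -o jsonpath='{.spec.nodeName}'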
6. Remove the Taint from the Default Node Pool
First, identify the default node (its name contains "default-pool"):
kubectl get nodes
Then remove its taint:
kubectl taint nodes <DEFAULT_NODE_NAME> function-
Verify:
kubectl describe node <DEFAULT_NODE_NAME> | grep -A1 "Taints"
Expected:
Taints:
<none>
Removing taints allows all untolerated pods to schedule on this node pool. Plan accordingly.
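If you need the taint back later, kubectl can reapply it at the node level. Keep in mind that taints set this way live only on the node object and will not survive node recreation; set them on the node pool's configuration for persistence:
kubectl taint nodes <DEFAULT_NODE_NAME> function=research:PreferNoSchedule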
7. Deploy a Pod Without Any Toleration
Save as dedicated-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: dedicated-pod
spec:
  containers:
  - name: nginx
    image: nginx
Apply and observe:
kubectl apply -f dedicated-pod.yaml
kubectl get pods
Since the default pool is now untainted, dedicated-pod transitions from Pending to Running on the default node.
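You can watch the scheduling happen live:
kubectl get pods --watch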
Congratulations! You’ve successfully used node taints and tolerations to control pod placement in a GKE cluster.