Control workload deployments using node taints

Node taints and pod tolerations in Google Kubernetes Engine (GKE) give you fine-grained control over where your workloads run. By marking nodes with taints and adding matching tolerations to pods, you ensure that only eligible pods land on specific nodes.

Taints and Tolerations Analogy

Imagine a birthday party with two zones: a colorful play area for kids and a quiet lounge for adults. You hand out blue bracelets to kids and green bracelets to adults so everyone stays in the right spot. In GKE:

  • Nodes are the party zones.
  • A taint on a node marks its zone (e.g., Music Area, Reading Area).
  • A pod’s toleration is its bracelet: pods with a matching toleration can be scheduled on that node.

The image is an overview diagram of node taints and tolerations in Google Kubernetes Engine (GKE), showing two nodes with specific taints and corresponding tolerations for "Music Area" and "Reading Area."

Pods with a music-area toleration can run on nodes tainted for music, just like kids gathering in their play zone, and pods tolerating reading-area can land on the quiet nodes. Note that a toleration permits scheduling on a tainted node but does not require it; to also pin a pod to those nodes, pair the toleration with a node selector.
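As a minimal sketch of the analogy (the node name, taint key, and container image are illustrative, not GKE defaults), you could taint a node for the music area and hand a pod the matching bracelet:

# Taint a node for the music area (illustrative key/value)
kubectl taint nodes party-node-1 area=music:NoSchedule

# Give a pod the matching bracelet (toleration)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: music-player
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "area"
    operator: "Equal"
    value: "music"
    effect: "NoSchedule"
EOF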

Autopilot vs. Standard Clusters

How you configure taints depends on your cluster mode:

| Cluster Mode | Node Management | Taint Setup | Automation |
| --- | --- | --- | --- |
| Autopilot | Fully managed by GKE | Taints applied automatically | GKE assigns taints at scale |
| Standard | User-defined node pools | You add taints when creating pools | You must update taints manually |

The image is an overview of node taints in Google Kubernetes Engine (GKE), comparing Autopilot and Standard modes, highlighting tasks like node provisioning, scheduling, taint, and toleration.

  • In Autopilot, GKE handles node lifecycles and applies taints based on pod requirements (see the sketch after this list).
  • In Standard mode, you configure node pools, labels, and taints yourself.
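In Autopilot the flow is inverted: instead of tainting nodes yourself, you declare a toleration (and usually a matching nodeSelector) on the pod, and GKE provisions nodes carrying the corresponding taint and label. A minimal sketch, assuming an illustrative group value:

# Autopilot workload separation ("group=batch-team" is an illustrative pair);
# GKE provisions nodes with the matching taint and label automatically
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: separated-workload
spec:
  nodeSelector:
    group: batch-team
  tolerations:
  - key: "group"
    operator: "Equal"
    value: "batch-team"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx
EOF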

Why Use Node Taints?

Taints help isolate workloads with specific needs, whether for resources, hardware, or compliance:

The image is a diagram explaining the need for node taint in GKE, highlighting workloads like batch coordination, game server matchmaking, server/database separation, and compliance reasons.

Batch Coordinator Workloads

Time-sensitive, resource-intensive jobs should run on dedicated nodes to avoid interference.

The image is a diagram illustrating "Node Taint" with a focus on "Batch Coordinator Workload," highlighting aspects like being time-intensive, requiring significant resources, and ensuring no interference.

Game Server Matchmaking

Low latency and specialized hardware are critical for matchmaking. A unique taint ensures that only pods tolerating it land on those machines.

The image is a diagram illustrating a "Node Taint" concept, showing a game server with matchmaking workload, labeled with "Unique Taint," and highlighting features like low latency and specialized hardware.

Server–Database Separation

By tainting web server nodes and database nodes differently, you prevent resource contention and improve performance (a command sketch follows this list):

  • Web server nodes: server-role=web:NoSchedule
  • Database nodes: server-role=db:NoSchedule
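A sketch of how those two taints could be applied and consumed (the node names and the pod below are illustrative):

# Taint each group of nodes (node names are illustrative)
kubectl taint nodes web-node-1 server-role=web:NoSchedule
kubectl taint nodes db-node-1 server-role=db:NoSchedule

# A web pod tolerates only the web taint, so it can never be
# scheduled onto a database node
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
spec:
  containers:
  - name: web
    image: nginx
  tolerations:
  - key: "server-role"
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"
EOF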

Compliance and Policy Requirements

Some workloads must adhere to privacy regulations or internal policies. Assign compliance-specific taints to enforce workload isolation.

The image is a diagram titled "Node Taint," illustrating the concept of compliance and policy reasons, with two subcategories: privacy requirements and regulatory constraints.

Note

Node taints are not a security boundary. For untrusted workloads or strict isolation, use network policies, dedicated clusters, or virtualization.

Applying Node Taints in GKE

You can apply taints directly with kubectl, or configure them at the GKE level for greater reliability (a gcloud sketch follows the list below):

  • GKE Console / gcloud
  • kubectl taint
  • Terraform or other IaC tools

The image illustrates ways of applying node taints in Kubernetes, highlighting "kubectl taint" and features like taint persistence, automatic taint creation, and seamless cluster autoscaling.
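In a Standard cluster, defining the taint on the node pool itself is what provides that persistence and autoscaler integration. A sketch, assuming illustrative pool, cluster, and zone names:

# Create a node pool whose nodes are tainted at creation; GKE reapplies
# the taint whenever nodes in the pool are recreated or scaled up
gcloud container node-pools create experimental-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --node-taints=dedicated=experimental:NoSchedule

# Inspect the taints configured on the pool
gcloud container node-pools describe experimental-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --format="value(config.taints)"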

Example: Tainting with kubectl

# Taint a node so no pods schedule unless they tolerate it
kubectl taint nodes node-1 dedicated=experimental:NoSchedule

# Verify the taint
kubectl describe node node-1 | grep Taints
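A pod opts in to the tainted node with a matching toleration, and the taint can later be removed by appending a hyphen to the same key and effect (the pod name and image below are illustrative):

# A pod that tolerates the taint applied above
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: experiment-runner
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "experimental"
    effect: "NoSchedule"
EOF

# Remove the taint by appending "-" to the key:effect
kubectl taint nodes node-1 dedicated=experimental:NoSchedule-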

Warning

Taints that you add or remove with kubectl apply only to the current node and are not preserved when GKE recreates it (for example, during upgrades, auto-repair, or scaling). Always define critical taints at the node pool or cluster level so GKE reapplies them automatically.

