Docker Certified Associate Exam Course

Docker Engine Security

Resource Limits CPU

In this lesson, we’ll explore how to control CPU resource usage for Docker containers using cgroups. Without limits, containers share the host kernel and can consume all available CPU and memory. If memory runs out, the Linux kernel’s OOM killer may terminate processes—including critical services—to free resources.

The image illustrates container memory limits and reservations, showing CPU and memory usage bars for a web application container on a Docker host.

Understanding how the Linux scheduler allocates CPU time is key to setting effective limits.

CPU Scheduling on Linux

On a host with a single CPU core, two processes cannot truly run at the same time. The Linux scheduler gives each process a tiny time slice (measured in microseconds), switching between them so rapidly that they appear to run in parallel. A process with a higher weight receives more, or longer, slices.

The image illustrates a concept of CPU sharing in Linux, showing a Docker host with two processes sharing a CPU.

CPU Shares and Weights

CPU shares are relative weights, not hard caps. For example:

  • Process A: 1024 shares
  • Process B: 512 shares

When both are busy, A receives roughly twice the CPU time of B. If either process runs alone, it can still use 100% of the CPU regardless of its share count.
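To see this ratio in practice, a minimal sketch is to start two CPU-bound containers with these weights and watch them in docker stats. The progrium/stress image and its --cpu argument are used here purely as an example load generator:

docker run -d --cpu-shares=1024 --name process-a progrium/stress --cpu 1
docker run -d --cpu-shares=512  --name process-b progrium/stress --cpu 1
docker stats process-a process-b

On a single-core host kept fully busy, process-a should settle at roughly twice the CPU percentage of process-b; stop one of them and the other climbs toward 100%.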

Docker leverages the Completely Fair Scheduler (CFS) by default. While Docker 1.13+ supports a real-time scheduler, we’ll focus on CFS here.

The image illustrates Linux CPU sharing using the Completely Fair Scheduler (CFS), featuring a Docker host with a CPU and a process labeled "Process 2" with a value of 512.

Applying CPU Shares to Containers

By default, each container gets 1024 CPU shares. To change this weight:

docker run --cpu-shares=512 --name webapp4 webapp

In this example, webapp4 has half the CPU weight of any container using the default 1024 shares.
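To confirm the weight that was actually applied, the value can be read back from the container's configuration, where it appears as HostConfig.CpuShares in docker inspect:

docker inspect --format '{{.HostConfig.CpuShares}}' webapp4

This should print 512 for the container started above.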

The image illustrates CPU shares in containers using a Completely Fair Scheduler (CFS), showing three processes within containers on a Docker host.

Note

CPU shares only affect relative scheduling. A container with fewer shares can still use 100% of CPU if it’s the only active workload.

Restricting Containers to Specific CPUs

To pin a container to specific cores, use --cpuset-cpus. On a 4-core host (0–3):

docker run --cpuset-cpus="0-1" --name webapp1 webapp
docker run --cpuset-cpus="0-1" --name webapp2 webapp
docker run --cpuset-cpus="2"   --name webapp3 webapp
docker run --cpuset-cpus="2"   --name webapp4 webapp

Here, webapp1 and webapp2 run only on cores 0 and 1; webapp3 and webapp4 on core 2. Core 3 stays free for other tasks.
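The assigned CPU set can likewise be verified from the container configuration (HostConfig.CpusetCpus in docker inspect):

docker inspect --format '{{.HostConfig.CpusetCpus}}' webapp1
docker inspect --format '{{.HostConfig.CpusetCpus}}' webapp3

The first command should print 0-1 and the second 2, matching the values passed at run time.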

Limiting CPU Usage with --cpus

Since Docker 1.13, you can enforce a hard CPU cap:

docker run --cpus=2.5 --name webapp4 webapp

This restricts webapp4 to 2.5 CPU cores (≈62.5% of a 4-core host). To update an existing container:

docker update --cpus=2.5 webapp4
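Docker records the --cpus value internally as NanoCpus (billionths of a CPU), so one way to verify the limit after the update is:

docker inspect --format '{{.HostConfig.NanoCpus}}' webapp4

A value of 2500000000 corresponds to --cpus=2.5.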

Warning

Without proper CPU limits, a container can monopolize the host CPU, starving other containers or the Docker daemon and causing an unresponsive system.

CPU Limit Configuration Options

Option        | Description                                | Example
--cpu-shares  | Relative CPU weight (default: 1024 shares) | docker run --cpu-shares=512 nginx
--cpuset-cpus | Pin container to specific CPU cores        | docker run --cpuset-cpus="0-1" nginx
--cpus        | Hard limit on number of CPU cores          | docker run --cpus=2.5 nginx
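These options can also be combined on a single container. The sketch below (webapp5 and the webapp image are illustrative names) pins the container to cores 0–2, caps it at 1.5 cores of total CPU time, and halves its scheduling weight relative to containers using the default 1024 shares:

docker run -d --cpus=1.5 --cpuset-cpus="0-2" --cpu-shares=512 --name webapp5 webapp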
