Connection Handling

In this guide, we’ll explore Nginx connection handling, from master–worker processes to keep-alive tuning, HTTP evolution, and TCP optimizations. By the end, you’ll know how to fine-tune your Nginx server for high concurrency and low latency.

Nginx Master and Worker Processes

Nginx follows a master–worker architecture:

  • The master process reads configuration files, manages worker lifecycles, and monitors for reloads.
  • Each worker process runs an event loop, independently handling client connections and events.

Image: the master process distributes requests to multiple worker processes, each running an event loop that monitors events concurrently.

With worker_processes auto; (the default shipped by many distribution packages), Nginx detects the available CPU cores and spawns one worker per core. On a 4-core machine, you get 4 workers handling connections in parallel.
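You can confirm the process model on a running host. The exact output varies by system and package, but it will look roughly like this:

ps aux | grep '[n]ginx'
# root      1123  ...  nginx: master process /usr/sbin/nginx
# www-data  1124  ...  nginx: worker process
# www-data  1125  ...  nginx: worker process
# www-data  1126  ...  nginx: worker process
# www-data  1127  ...  nginx: worker process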

Configuring Workers

Place the following in your nginx.conf:

worker_processes auto;

events {
    worker_connections 512;
}
  • worker_processes auto; auto-detects CPU cores.
  • worker_connections 512; sets max concurrent connections per worker (default: 512).

Warning

Setting worker_connections too high may exhaust file descriptors. Monitor with ulimit -n and adjust your OS limits accordingly.
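As a quick sketch, check the current limit with ulimit and, if needed, raise the per-worker cap from Nginx itself with worker_rlimit_nofile (the value 8192 below is only an example):

ulimit -n
# 1024

Then, in the main context of nginx.conf:

worker_rlimit_nofile 8192;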

Image: a "Worker Processes" diagram with a CPU chip icon connected to a worker box.

On a 4-core VM:

  • Workers: 4
  • Connections: 4 × 512 = 2048

To support more clients, simply increase worker_connections:

events {
    worker_connections 1024;
}
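With the same 4 workers, this raises the theoretical ceiling to 4 × 1024 = 4096 concurrent connections, so make sure your file-descriptor limits keep pace.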

Keep-Alive Connections

Persistent (keep-alive) connections reuse a single TCP socket for multiple HTTP requests, reducing handshake overhead:

Image: a keep-alive connection between browser and server reusing one socket to transfer JS, HTML, JSON, XML, and CSS files, speeding up requests and reducing CPU and network overhead.

Add these directives in the http block:

http {
    # Max requests per keep-alive connection
    keepalive_requests 100;

    # Idle timeout (seconds) for keep-alive
    keepalive_timeout 65;
}
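To confirm keep-alive is in effect, inspect the response headers. This assumes Nginx is serving HTTP/1.1 on localhost:

curl -sI http://localhost/ | grep -i connection
# Connection: keep-alive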

Upstream Keep-Alive

If Nginx proxies to backend servers, maintain idle connections upstream:

http {
    upstream backend {
        server 10.10.0.101:80;
        server 10.10.0.102:80;
        server 10.10.0.103:80;

        # Cache up to 32 idle keep-alive connections per worker
        keepalive 32;
    }
}

Then, in your server block:

server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend;
    }
}
  • proxy_http_version 1.1; ensures HTTP/1.1 persistent connections.
  • proxy_set_header Connection ""; clears the Connection: close header Nginx would otherwise forward, so upstream connections stay open.
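To see that idle upstream connections are actually kept open, list Nginx's established TCP sockets while the server is under load (illustrative output, assuming the backend addresses above):

ss -tnp | grep nginx
# ESTAB  0  0  10.10.0.1:48832  10.10.0.101:80  users:(("nginx",pid=1124,...))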

You can verify which HTTP version a server negotiates with:

curl --head https://www.google.com
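The first line of the response shows the negotiated protocol; at the time of writing, the output starts like this:

HTTP/2 200
content-type: text/html; charset=ISO-8859-1
...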

Note

Persistent upstream connections can significantly reduce backend latency. Always test under load!


Evolution of HTTP

HTTP has evolved to improve performance, security, and flexibility.

HTTP Version | Key Features                                                                         | Release
HTTP/0.9     | Simple GET, no headers or status codes                                               | 1991
HTTP/1.0     | Added status codes (200, 404), GET/POST methods                                      | 1996
HTTP/1.1     | Persistent connections, chunked transfer, caching, cookies, compression, pipelining | 1997
HTTP/2       | Multiplexing, header compression (HPACK), binary framing                             | 2015
HTTP/3       | QUIC over UDP, 0-RTT, built-in TLS 1.3, connection migration                         | 2020

Image: timeline of HTTP versions, highlighting HTTP/1.1 features such as persistent connections, caching, cookies, compression, and reduced latency.

  • HTTP/2: Multiplex streams over one TCP connection; header compression.
  • HTTP/3: Runs on QUIC/UDP, reduces latency, supports connection migration.
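As a minimal sketch, enabling HTTP/2 takes one directive on a TLS listener, while HTTP/3 needs a QUIC listener plus an Alt-Svc header. This assumes Nginx 1.25+ built with HTTP/3 support; the certificate paths are placeholders:

server {
    listen 443 ssl;
    listen 443 quic reuseport;   # HTTP/3 over QUIC/UDP (Nginx 1.25+)
    http2 on;                    # Nginx 1.25.1+; older versions use: listen 443 ssl http2;

    ssl_certificate     /etc/nginx/certs/example.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/certs/example.key;   # placeholder path

    # Advertise HTTP/3 so clients can upgrade from HTTP/2
    add_header Alt-Svc 'h3=":443"; ma=86400';
}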

Image: timeline of HTTP versions, highlighting HTTP/3 features such as being built on QUIC and requiring TLS 1.3.

TCP vs. UDP

Protocol | Reliability            | Use Cases                 | Transport Model
TCP      | Ordered, error-checked | Web, email, file transfer | Connection-oriented
UDP      | Unordered, best-effort | Gaming, streaming, VoIP   | Connectionless

Currently, ~34.6% of sites use HTTP/2, 34.0% HTTP/3, and 31.4% still run HTTP/1.1.

Image: historical usage trends showing HTTP/2 at 34.6% and HTTP/3 at 34.0% of websites.


Optimizing File Transfers with sendfile

By default, file transfers copy data from disk into user space and then back into the kernel to be written to the network. This extra copying adds CPU and memory overhead.

Image: diagram of a file transfer in Linux crossing user space, the system call interface, the kernel, and hardware.

Enable zero-copy to let the kernel send file data directly from disk to socket:

http {
    sendfile on;
}

This improves throughput and reduces CPU cycles.

Note

On some platforms (e.g., older BSD variants), sendfile may behave differently. Test before deploying to production.


Reducing TCP Overhead with tcp_nopush

Small TCP packets increase protocol overhead. tcp_nopush delays sending until a full packet is ready:

http {
    tcp_nopush on;
}

This directive groups response headers and file chunks into full packets, reducing per-packet overhead on the network.
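Nginx applies tcp_nopush only while sendfile is active, so the two directives are typically enabled together:

http {
    sendfile   on;    # zero-copy transfers from the kernel
    tcp_nopush on;    # takes effect only when sendfile is on
}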


By tuning these core options—master/worker processes, keep-alive, sendfile, and TCP flags—you’ll boost Nginx performance and resource efficiency.
