Nginx For Beginners
Performance
Connection Handling
In this guide, we’ll explore Nginx connection handling, from master–worker processes to keep-alive tuning, HTTP evolution, and TCP optimizations. By the end, you’ll know how to fine-tune your Nginx server for high concurrency and low latency.
Nginx Master and Worker Processes
Nginx follows a master–worker architecture:
- The master process reads configuration files, manages worker lifecycles, and monitors for reloads.
- Each worker process runs an event loop, independently handling client connections and events.
By default, Nginx detects CPU cores and spawns one worker per core. On a 4-core machine, you get 4 workers handling connections in parallel.
Configuring Workers
Place the following in your nginx.conf:
worker_processes auto;

events {
    worker_connections 512;
}
- worker_processes auto; auto-detects CPU cores.
- worker_connections 512; sets the maximum concurrent connections per worker (default: 512).
Warning
Setting worker_connections too high may exhaust file descriptors. Monitor with ulimit -n and adjust your OS limits accordingly.
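If you do need more descriptors, Nginx can also raise its own per-process limit with the worker_rlimit_nofile directive, avoiding a restart of the whole OS limit chain. A minimal sketch (the values here are illustrative, not recommendations):

```nginx
# Raise the per-worker open-file limit (main context, outside http {}).
# 8192 is an illustrative value; keep it at or below your OS hard limit.
worker_rlimit_nofile 8192;

events {
    # Each client connection consumes at least one file descriptor,
    # and each proxied connection consumes two.
    worker_connections 4096;
}
```

Keep worker_connections comfortably below worker_rlimit_nofile so workers never hit the descriptor ceiling under load.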
On a 4-core VM:
- Workers: 4
- Connections: 4 × 512 = 2048
To support more clients, simply increase worker_connections:
events {
    worker_connections 1024;
}
Keep-Alive Connections
Persistent (keep-alive) connections reuse a single TCP socket for multiple HTTP requests, reducing handshake overhead. Add these directives in the http block:
http {
    # Max requests per keep-alive connection
    keepalive_requests 100;

    # Idle timeout (seconds) for keep-alive
    keepalive_timeout 65;
}
Upstream Keep-Alive
If Nginx proxies to backend servers, maintain a pool of idle upstream connections:

http {
    upstream backend {
        server 10.10.0.101:80;
        server 10.10.0.102:80;
        server 10.10.0.103:80;

        keepalive 32;
    }
}
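In recent Nginx versions (1.15.3+), the upstream keep-alive pool can also be bounded by request count and idle time. A sketch with illustrative values:

```nginx
upstream backend {
    server 10.10.0.101:80;
    server 10.10.0.102:80;
    server 10.10.0.103:80;

    keepalive 32;               # idle connections cached per worker
    keepalive_requests 1000;    # recycle a connection after this many requests
    keepalive_timeout 60s;      # close idle upstream connections after 60s
}
```

Bounding connection lifetime helps upstream servers reclaim resources and plays nicely with backends that enforce their own idle timeouts.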
Then, in your server block:
server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend;
    }
}
- proxy_http_version 1.1; ensures HTTP/1.1 persistent connections to the upstream.
- proxy_set_header Connection ""; clears the Connection header so Nginx does not forward Connection: close.
You can verify the negotiated HTTP version with:
curl --head https://www.google.com
Note
Persistent upstream connections can significantly reduce backend latency. Always test under load!
Evolution of HTTP
HTTP has evolved to improve performance, security, and flexibility.
HTTP Version | Key Features | Release
---|---|---
HTTP/0.9 | Simple GET, no headers or status codes | 1991
HTTP/1.0 | Added status codes (200, 404), GET/POST methods | 1996
HTTP/1.1 | Persistent connections, chunked transfer, caching, cookies, compression, pipelining | 1997
HTTP/2 | Multiplexing, header compression (HPACK), binary framing | 2015
HTTP/3 | QUIC over UDP, 0-RTT, built-in TLS 1.3, connection migration | 2020
- HTTP/2: Multiplex streams over one TCP connection; header compression.
- HTTP/3: Runs on QUIC/UDP, reduces latency, supports connection migration.
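As a sketch, enabling HTTP/2 and HTTP/3 in Nginx looks roughly like this. Assumptions: both protocols require TLS in practice, the http2/http3 directives need Nginx 1.25.1+, QUIC support must be compiled in, and the certificate paths are placeholders:

```nginx
server {
    listen 443 ssl;              # HTTP/1.1 and HTTP/2 over TCP/TLS
    listen 443 quic reuseport;   # HTTP/3 over QUIC/UDP (QUIC-enabled build)

    http2 on;                    # Nginx 1.25.1+ directive syntax
    http3 on;

    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/certs/example.key;  # placeholder path

    # Advertise HTTP/3 to clients that connected over TCP first
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

Older releases use the `listen 443 ssl http2;` form instead of the standalone http2 directive.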
TCP vs. UDP
Protocol | Reliability | Use Cases | Transport Model
---|---|---|---
TCP | Ordered, error-checked | Web, email, file transfer | Connection-oriented
UDP | Unordered, best-effort | Gaming, streaming, VoIP | Connectionless
Currently, ~34.6% of sites use HTTP/2, 34.0% HTTP/3, and 31.4% still run HTTP/1.1.
Optimizing File Transfers with sendfile
Default file transfers read data into user space, then write to the network. This double-buffering adds CPU and memory overhead.
Enable zero-copy to let the kernel send file data directly from disk to socket:
http {
    sendfile on;
}
This improves throughput and reduces CPU cycles.
Note
On some platforms (e.g., older BSD variants), sendfile may behave differently. Test before deploying to production.
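One caveat with zero-copy transfers: on fast connections, a single sendfile() call can occupy a worker for a long time, starving other connections in its event loop. Nginx's sendfile_max_chunk directive caps that; a sketch with an illustrative value:

```nginx
http {
    sendfile on;

    # Limit the data sent in one sendfile() call so a single fast
    # client cannot monopolize a worker (illustrative value).
    sendfile_max_chunk 1m;
}
```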
Reducing TCP Overhead with tcp_nopush
Small TCP packets increase protocol overhead. tcp_nopush delays sending until a full packet is ready:
http {
    tcp_nopush on;
}
This directive groups headers and file chunks into larger packets, cutting network congestion.
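In practice, these flags are usually enabled together: tcp_nopush only takes effect when sendfile is in use, and tcp_nodelay ensures the final, partially filled packet of a response is flushed without waiting. A common combination:

```nginx
http {
    sendfile    on;   # tcp_nopush only applies when sendfile is active
    tcp_nopush  on;   # fill packets before sending (TCP_CORK on Linux)
    tcp_nodelay on;   # flush the last partial packet without delay
}
```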
By tuning these core options—master/worker processes, keep-alive, sendfile, and TCP flags—you’ll boost Nginx performance and resource efficiency.