Nginx For Beginners
Performance
Caching
Caching helps you serve repeated requests faster by storing frequently accessed content closer to the client. Think of it like ordering the same coffee and cookie every morning: at first, the barista prepares your order on demand, but over time they have it ready before you even ask. Similarly, Nginx can remember—and quickly deliver—static or dynamic resources without hitting your backend every time.
When a user revisits a website, cached assets are served directly, reducing backend load and improving response times.
Nginx can act as both a reverse proxy and a cache server. It sits in front of your application, intercepting requests and serving cached responses whenever possible.
Core Proxy Cache Configuration
Below is a minimal `nginx.conf` snippet illustrating the main caching directives:
```nginx
http {
    proxy_cache_path /var/lib/nginx/cache
                     levels=1:2
                     keys_zone=app_cache:8m
                     max_size=50m;
    proxy_cache_key "$scheme$request_method$host$request_uri";
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;

    server {
        listen 80;
        server_name example.com www.example.com;

        location / {
            proxy_cache app_cache;
            add_header X-Proxy-Cache $upstream_cache_status;
            # "backend" must be defined in an upstream block (or replaced
            # with your application server's address).
            proxy_pass http://backend;
        }
    }
}
```
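Two optional directives are often added alongside `proxy_cache` to harden the setup; a sketch (the directive choices here are illustrative additions, not part of the configuration above):

```nginx
location / {
    proxy_cache app_cache;
    # Serve a stale cached copy if the backend errors out, times out,
    # or the entry is currently being refreshed.
    proxy_cache_use_stale error timeout updating http_500 http_502;
    # Let only one request populate a missing cache entry;
    # concurrent requests for the same key wait for it.
    proxy_cache_lock on;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_pass http://backend;
}
```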
proxy_cache_path
Defines where and how cache files are stored:
| Parameter | Description |
|---|---|
| `/var/lib/nginx/cache` | Filesystem path for cached data (ensure correct permissions). |
| `levels=1:2` | Two-level directory structure to avoid too many files per folder. |
| `keys_zone=app_cache:8m` | Shared memory zone `app_cache` using 8 MB for cache keys and metadata. |
| `max_size=50m` | Optional limit; older entries are removed when the cache exceeds this size. |
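To see what `levels=1:2` means on disk: Nginx names each cache file after the MD5 hash of its cache key, and the directory levels are taken from the end of that hash (one character, then two). A hypothetical entry might be stored as:

```
# MD5 of the cache key ends in ...029c → level 1 = "c", level 2 = "29"
/var/lib/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c
```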
Note
Verify that the Nginx worker user has read/write access to the cache directory configured above (`/var/lib/nginx/cache` in this example).
proxy_cache_key
By default, Nginx uses the full request URL as the cache key. You can customize it to include protocol, HTTP method, host, and URI for better uniqueness:
```nginx
proxy_cache_key "$scheme$request_method$host$request_uri";
```
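As an illustration, for a plain-HTTP request `GET /index.html` to `example.com`, the variables in that key would expand to:

```
$scheme         → http
$request_method → GET
$host           → example.com
$request_uri    → /index.html
key             → "httpGETexample.com/index.html"
```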
proxy_cache_valid
Control how long different response codes are kept in cache:
```nginx
proxy_cache_valid 200 302 10m;   # Cache successful responses and redirects for 10 minutes
proxy_cache_valid 404 1m;        # Cache not-found responses for 1 minute
```
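`proxy_cache_valid` also accepts the `any` keyword as a catch-all for every other status code; a sketch (the 30-second TTL is an arbitrary choice):

```nginx
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
proxy_cache_valid any 30s;   # all other response codes, cached briefly
```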
Optional: Bypassing the Cache
For certain dynamic routes or when clients send cache-control headers, you may want to skip cache lookup:
```nginx
location /api/ {
    proxy_cache app_cache;
    proxy_cache_bypass $http_cache_control;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_pass http://backend;
}
```
Warning
Overusing `proxy_cache_bypass` will reduce cache effectiveness and increase load on your backend. Use it only when necessary.
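When you skip the cache lookup, you usually also want to skip storing the response, which is what `proxy_no_cache` does. A sketch combining both (the `session_id` cookie name is a hypothetical example):

```nginx
location /api/ {
    proxy_cache app_cache;
    # Skip reading from the cache when the client sends
    # Cache-Control or a session cookie...
    proxy_cache_bypass $http_cache_control $cookie_session_id;
    # ...and skip writing the response to the cache for the same requests.
    proxy_no_cache     $http_cache_control $cookie_session_id;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_pass http://backend;
}
```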
Verifying Cache Behavior
Inspect the `X-Proxy-Cache` header in your HTTP response to determine whether content was served from cache:

```
X-Proxy-Cache: HIT    # Served from cache
X-Proxy-Cache: MISS   # Fetched from backend
```
Example response headers:
```
Accept-Ranges: bytes
Age: 10
Cache-Control: s-maxage=60
Content-Encoding: gzip
Content-Length: 103408
Content-Type: text/html; charset=iso-8859-15
Date: Fri, 24 Jan 2025 17:25:21 GMT
Vary: Accept-Encoding, User-Agent
X-Backend: CONTENT
X-Proxy-Cache: HIT
X-Cache-Hits: 1, 1
X-Served-By: cache-ams21071-AMS
X-Timer: S17337739521.335220,VS0,VE2
```
Use browser developer tools or your server logs to monitor cache hits and misses. A `HIT` indicates a cached response, while a `MISS` shows that Nginx forwarded the request upstream.