- Server blocks and server_name
- Server blocks (virtual hosts) let a single Nginx instance host multiple sites.
- The `server_name` directive maps incoming requests (via the Host header) to the correct server block.
- Configure `server_name` precisely (exact names, wildcards, and a fallback default) to avoid requests being handled by the wrong server block.
- Always set an explicit `server_name` in each server block. If omitted or misconfigured, requests can be routed to the default server block, causing unexpected responses or security exposure.
- Exact match: `server_name example.com;`
- Wildcard subdomain: `server_name *.example.com;`
- Default server: `listen 80 default_server;`
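A minimal sketch combining these patterns, using `example.com` as a placeholder domain:

```nginx
# Catch-all default server: handles requests whose Host matches no other block.
server {
    listen 80 default_server;
    server_name _;
    return 444;                    # close the connection without a response
}

# Exact name match.
server {
    listen 80;
    server_name example.com www.example.com;
    # ...
}

# Wildcard match for any subdomain of example.com.
server {
    listen 80;
    server_name *.example.com;
    # ...
}
```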
- HTTP → HTTPS redirection
- Redirect all HTTP traffic to HTTPS with a minimal server block. Use a 301 (permanent) redirect in most cases, or 308 when the request method and body must be preserved.
- Preserve the Host header and request path using `$host` and `$request_uri`.
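A minimal redirect block along those lines:

```nginx
server {
    listen 80 default_server;
    server_name _;
    # $host preserves the requested hostname; $request_uri keeps path and query.
    return 301 https://$host$request_uri;
}
```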
- Rewrite rules and regex
- Nginx supports `rewrite` directives and regex-based `location` matching for URL manipulation and redirects.
- Prefer `try_files` for resolving static files before using rewrites; `try_files` is generally faster and simpler for static content and single-page apps.
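A common `try_files` pattern for a single-page app:

```nginx
location / {
    # Serve the requested file if it exists, then a matching directory,
    # and otherwise fall back to index.html for client-side routing.
    try_files $uri $uri/ /index.html;
}
```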
- Upstreams and backend pools
- The `upstream` block defines a pool of backend servers Nginx proxies to.
- Use upstreams when reverse proxying or load balancing application backends, including health check integrations or server weighting.
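A sketch of an upstream pool with weighting, assuming placeholder backend addresses and the pool name `app_backend`:

```nginx
upstream app_backend {
    server 10.0.0.11:8080 weight=2;   # receives twice the share of requests
    server 10.0.0.12:8080;
    server 10.0.0.13:8080 backup;     # used only when the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```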
- Load balancing algorithms
- Choose a balancing method based on session persistence needs, backend capacity, and fault tolerance.
| Algorithm | Use Case | Notes |
|---|---|---|
| round-robin | Default, evenly distribute requests | Simple and effective for similar-capacity backends |
| weighted round-robin | Backends with different capacity | Assign weight per server to control share |
| ip_hash | Session stickiness by client IP | Useful for basic session affinity; can create uneven load with NAT/proxying |
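Switching the method is a one-line change inside the `upstream` block; for example, `ip_hash` (addresses are placeholders):

```nginx
upstream sticky_backend {
    ip_hash;                 # pick a server by hashing the client IP
    server 10.0.0.21:8080;
    server 10.0.0.22:8080;
}
```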
- IP-based sticky sessions (`ip_hash`) can produce uneven load when many clients share an IP (NAT/proxies). For robust session persistence, prefer shared session stores (for example, Redis) or application-level affinity techniques.
- Reverse proxy vs. load balancer
- Reverse proxy: forwards client requests to one or more backend services; a single backend is sufficient.
- Load balancer: distributes traffic across multiple backends to provide scaling and failover. Requires two or more backends.
- Nginx can function as both; decide based on whether you need distribution and resilience (load balancer) or simple request forwarding/edge functionality (reverse proxy).
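For the simple-forwarding case, no `upstream` block is needed; a sketch with a placeholder hostname and backend address:

```nginx
# Reverse proxy to a single backend.
server {
    listen 80;
    server_name app.example.com;           # placeholder hostname
    location / {
        proxy_pass http://127.0.0.1:3000;  # placeholder backend address
        proxy_set_header Host $host;
    }
}
```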
- Caching
  - Proper caching reduces backend load and improves latency. Configure `proxy_cache`, Cache-Control headers, and cache keys to maintain correctness.
  - Consider cache invalidation strategies and TTLs to ensure clients receive fresh content when needed.
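A sketch of `proxy_cache` with an explicit key and TTL, assuming the placeholder zone name `app_cache` and a placeholder backend:

```nginx
# Define the cache zone in the http{} context.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache app_cache;
        proxy_cache_key $scheme$host$request_uri;   # explicit cache key
        proxy_cache_valid 200 301 10m;              # TTL for cacheable codes
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://127.0.0.1:8080;           # placeholder backend
    }
}
```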
- Checklist
  - Explicit `server_name` and a sensible default server.
  - Proper TLS termination and HTTP→HTTPS redirects.
  - Use `try_files` for static assets where possible.
  - Define upstreams for backend pools and pick an appropriate load balancing algorithm.
- Ensure proxy headers (Host, X-Forwarded-For, X-Forwarded-Proto) are set correctly.
- Implement caching with controlled TTLs and well-chosen cache keys.
- Reconsider `ip_hash` for session stickiness if clients are behind NAT; use shared session storage when necessary.
- References
  - Nginx Documentation
- Nginx Configuration Guide — server_name
- Redis — in-memory data store
- HTTP caching basics (MDN)