That brings us to the end of this module; thanks for joining. This lesson covered the core concepts you need to get started with NGINX and basic server management. Below is a concise recap that follows the learning sequence and highlights practical commands and best practices.
  • Package managers
    • Use your distribution’s package manager (apt, yum, dnf, pacman, etc.) to install, upgrade, and remove software. Package managers handle dependencies and simplify maintenance compared with ad-hoc installations.
    • Compiling NGINX from source is appropriate when you need custom modules or compile-time flags that distribution packages don’t offer, but it increases maintenance overhead: you must track and apply updates yourself. For most production deployments, prefer your distribution’s prebuilt packages.
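The package-manager lifecycle above looks like this on Debian/Ubuntu with apt (other distros have equivalent dnf, yum, or pacman commands):

```shell
sudo apt update           # refresh package metadata first
sudo apt install nginx    # install; dependencies are resolved automatically
sudo apt upgrade nginx    # upgrade to the latest packaged version
sudo apt remove nginx     # uninstall (keeps config files; 'purge' removes them too)
```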
  • Installing NGINX
    • We demonstrated installing NGINX across Linux distributions; this module used Ubuntu for examples. Installation steps are similar across Linux families but use the package manager native to your distro.
    • Avoid running production NGINX on Windows or macOS. Most production NGINX servers run on Linux.
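On Ubuntu, the distribution used in this module’s examples, the full install and a quick sanity check come down to a few commands:

```shell
sudo apt update
sudo apt install nginx
nginx -v                  # print the installed NGINX version
systemctl status nginx    # confirm the service is active (running)
```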
  • Managing NGINX processes and configuration
    • On modern Linux systems using systemd, these are the most common service commands:
Action           Command
Start NGINX      sudo systemctl start nginx
Stop NGINX       sudo systemctl stop nginx
Restart NGINX    sudo systemctl restart nginx
Enable at boot   sudo systemctl enable nginx
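The enable and start steps can also be combined; systemd’s `--now` flag enables the unit at boot and starts it immediately:

```shell
sudo systemctl enable --now nginx   # enable at boot and start now
systemctl status nginx              # verify state and see recent log lines
```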
  • Use NGINX signals and built-in checks for finer control:
# Test configuration syntax (always do this before reloading)
sudo nginx -t

# Gracefully reload (restarts worker processes without dropping connections)
sudo nginx -s reload

# Alternatively, use systemd to reload the service
sudo systemctl reload nginx
  • Always run nginx -t before reloading to catch syntax errors and avoid service disruptions.
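A common way to enforce this habit is to chain the two commands, so the reload only runs if the syntax check succeeds:

```shell
# Reload only if the configuration test passes (nginx -t exits non-zero on errors)
sudo nginx -t && sudo systemctl reload nginx
```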
  • nginx.conf structure and block contexts
    • nginx.conf is hierarchical and uses named contexts:
Context    Purpose
Global     Directives that affect the master process (user, pid, error_log, etc.)
events     Worker connection limits and event model (worker_connections, use)
http       Global HTTP settings, MIME types, upstreams, and include directives
server     Virtual hosts; handles requests for a specific listen address/port
location   Request routing within a server block (static files, proxied paths)
  • Server blocks enable hosting multiple sites on one NGINX instance (name-based virtual hosts). Organize includes (e.g., /etc/nginx/sites-enabled) to keep configuration modular.
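The context hierarchy in the table above can be sketched as a minimal nginx.conf; the domain and paths here are placeholders for your own values:

```nginx
# Global context: master-process directives
user       www-data;
error_log  /var/log/nginx/error.log;

events {
    worker_connections  768;
}

http {
    include  mime.types;

    # One server block per virtual host
    server {
        listen       80;
        server_name  example.com;

        # Route requests within this virtual host
        location / {
            root   /var/www/example.com;
            index  index.html;
        }
    }
}
```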
  • server_name and default behavior
    • Always set server_name in each server block. If no server_name matches the request’s Host header, NGINX uses the default server for that address:port — the server block whose listen directive carries the default_server flag, or, absent that flag, the first server block defined for that listen socket (ordering can be influenced by include directives and file name order).
    • Explicit server_name entries prevent unexpected requests from being handled by the wrong server block.
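One common pattern for making the default explicit is a catch-all block; this sketch uses NGINX’s non-standard status 444, which closes the connection without sending a response:

```nginx
# Catch-all: requests whose Host header matches no server_name land here
server {
    listen 80 default_server;
    server_name _;      # placeholder name; never matches a real Host header
    return 444;         # drop the connection without a response
}
```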
  • Serving static files and simple site pages
    • NGINX is optimized for serving static assets (HTML, CSS, JS, images). To verify your configuration, a simple index.html in your server root is often sufficient.
    • Example: place index.html in your configured root directory and ensure index and root directives in your server block point to it.
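To verify, you can drop a one-line page into the root directory and request it locally (the path below assumes the example root /var/www/example.com):

```shell
echo '<h1>It works</h1>' | sudo tee /var/www/example.com/index.html
curl -i http://localhost/    # expect a 200 response with the page body if the server block is correct
```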
  • Firewalls and network/port controls
    • Always enable and configure a host-level firewall (UFW, firewalld, iptables/nftables) to restrict access to only the ports and IPs you expect.
    • For public websites, open ports 80 (HTTP) and 443 (HTTPS). For private testing, whitelist your IP only.
Scenario                        Ports to open
Public website                  80 (HTTP), 443 (HTTPS)
Admin-only testing              Only necessary ports (e.g., 22 for SSH, 80/443 if testing), restricted to a single source IP
Health checks / load balancer   Health-check port(s) as required by your infrastructure
  • Don’t forget cloud-provider firewalls/security groups (GCP, Azure, AWS) — these are separate from the host firewall and must be configured to allow the same traffic.
When running in a cloud provider, ensure both the cloud security group (or firewall) and the instance’s host firewall permit your intended ports. Mismatched rules will block traffic even if one side is correctly configured.
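On Ubuntu, the host-firewall side of this can be sketched with UFW; the Ubuntu nginx package ships an “Nginx Full” profile covering ports 80 and 443, and the admin IP below is an example address from the documentation range:

```shell
sudo ufw allow 'Nginx Full'                                  # open 80 and 443
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp    # SSH from one admin IP only
sudo ufw enable
sudo ufw status                                              # review the active rules
```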
Key reminders
  • Test configuration changes with nginx -t before reloading to prevent downtime.
  • Use systemd (systemctl) to manage the service on modern Linux distributions.
  • Prefer distribution packages for production servers unless you have a clear, documented reason to compile from source.
  • Keep your host firewall and cloud-provider network rules synchronized.
Thanks again for joining this module. Take a break; you’ve earned it. Get a cup of coffee.