Nginx Configuration Complete Guide - From Installation to Production
Everything You Need to Know to Master Nginx in Real-World Environments
Introduction: Why Nginx?
Nginx (pronounced "engine-x") is a web server used by approximately 34% of all websites worldwide, standing alongside Apache as one of the two dominant web servers. Designed in 2004 by Russian developer Igor Sysoev to solve the C10K problem (handling 10,000 simultaneous connections), Nginx adopts an event-driven, asynchronous architecture that can handle a massive number of concurrent connections with minimal memory usage.
In production environments, Nginx goes far beyond being a simple web server, serving as a reverse proxy, load balancer, API gateway, cache server, and more. Its importance has only grown with the widespread adoption of container environments and microservices architecture.
This guide systematically covers everything from installation to production-ready configurations for Nginx. From basic setups that beginners can follow along with, to advanced configurations that are essential for production use, we provide detailed explanations with real configuration file examples.
1. Installing Nginx
1.1 Installation on Ubuntu/Debian
# Update package list and install
sudo apt update
sudo apt install nginx -y
# Check version
nginx -v
# Start service and enable auto-start
sudo systemctl start nginx
sudo systemctl enable nginx
# Check status
sudo systemctl status nginx
1.2 Installation on RHEL/CentOS/Rocky Linux
# Add EPEL repository (CentOS 7)
sudo yum install epel-release -y
sudo yum install nginx -y
# Rocky Linux 9 / RHEL 9
sudo dnf install nginx -y
# Start service and enable auto-start
sudo systemctl start nginx
sudo systemctl enable nginx
# Allow through firewall
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
1.3 Installing the Latest Version from the Official Nginx Repository
The Nginx version in distribution default repositories is often outdated. If you need the latest features, add the official repository:
# Ubuntu - Add Nginx official repository
sudo apt install curl gnupg2 ca-certificates lsb-release -y
# Add signing key
curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo gpg --dearmor -o /usr/share/keyrings/nginx-archive-keyring.gpg
# Add repository (stable)
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu $(lsb_release -cs) nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
# Install
sudo apt update
sudo apt install nginx -y
Nginx ships two release branches: mainline (latest features, odd minor version numbers such as 1.27.x) and stable (even minor version numbers such as 1.26.x). For production environments, stable is recommended.
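As a quick sanity check, the branch can be read off the minor version number. A small helper sketch (the version strings below are examples; in practice substitute the output of `nginx -v`):

```shell
# Illustrative helper: classify an nginx version string by release branch.
# Real usage: ver=$(nginx -v 2>&1 | grep -o '[0-9][0-9.]*$'); branch_of "$ver"
branch_of() {
  minor=$(echo "$1" | cut -d. -f2)        # second dotted field is the minor version
  if [ $((minor % 2)) -eq 0 ]; then
    echo stable                            # even minor -> stable branch
  else
    echo mainline                          # odd minor -> mainline branch
  fi
}

branch_of 1.26.2   # stable
branch_of 1.27.4   # mainline
```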
1.4 Verifying the Installation
# Verify in browser: http://SERVER_IP
# Or verify via command line
curl -I http://localhost
# Check installation path
nginx -V # Check compile options and modules
nginx -t # Validate configuration syntax
2. Nginx Directory Structure and Configuration Files
2.1 Core Directory Structure
| Path | Description |
|---|---|
| `/etc/nginx/` | Main configuration directory |
| `/etc/nginx/nginx.conf` | Main configuration file (entry point for all settings) |
| `/etc/nginx/conf.d/` | Additional configuration files (`*.conf` auto-loaded) |
| `/etc/nginx/sites-available/` | Available site configurations (Ubuntu) |
| `/etc/nginx/sites-enabled/` | Enabled site symlinks (Ubuntu) |
| `/etc/nginx/mime.types` | MIME type mappings |
| `/var/log/nginx/access.log` | Access log |
| `/var/log/nginx/error.log` | Error log |
| `/var/www/html/` | Default web root directory |
| `/usr/share/nginx/html/` | Default web root (RHEL-based) |
2.2 Basic Structure of nginx.conf
Nginx configuration is organized in a block structure with hierarchical nesting:
# === Main Context ===
user nginx; # Worker process user
worker_processes auto; # Number of worker processes (auto = CPU cores)
error_log /var/log/nginx/error.log warn; # Error log path and level
pid /run/nginx.pid; # PID file path
# === Events Context ===
events {
worker_connections 1024; # Max concurrent connections per worker
use epoll; # Event processing method (Linux)
multi_accept on; # Accept multiple connections at once
}
# === HTTP Context ===
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Log format definition
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log main;
sendfile on; # Kernel-level file transfer
tcp_nopush on; # Used with sendfile
tcp_nodelay on; # Prevent small packet delay
keepalive_timeout 65; # Connection keep-alive time
# gzip compression
gzip on;
gzip_types text/plain text/css application/json application/javascript;
# Load additional configuration files
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*; # Ubuntu style
}
After every configuration change, validate the syntax with `nginx -t` and apply it with `sudo systemctl reload nginx`. Using reload instead of restart applies the configuration without service interruption.
3. Server Block (Virtual Host) Configuration
Server Blocks are the Nginx equivalent of Apache's VirtualHost, allowing you to run multiple domains (sites) on a single Nginx server.
3.1 Basic Static Website
# /etc/nginx/conf.d/example.com.conf
server {
listen 80; # IPv4 port
listen [::]:80; # IPv6 port
server_name example.com www.example.com;
root /var/www/example.com/html; # Document root
index index.html index.htm; # Default index files
# Default location
location / {
try_files $uri $uri/ =404; # File -> Directory -> 404
}
# Custom error pages
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# Log configuration (per-site separation)
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
}
# Create web root directory
sudo mkdir -p /var/www/example.com/html
sudo chown -R $USER:$USER /var/www/example.com/html
# Create test page
echo "<h1>Welcome to example.com</h1>" > /var/www/example.com/html/index.html
# Validate and apply configuration
sudo nginx -t
sudo systemctl reload nginx
3.2 Ubuntu's sites-available / sites-enabled Pattern
# Create configuration file
sudo nano /etc/nginx/sites-available/example.com
# Activate with symlink
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
# Disable default site (if needed)
sudo rm /etc/nginx/sites-enabled/default
# Apply
sudo nginx -t
sudo systemctl reload nginx
3.3 Running Multiple Domains (Multi-Site)
# Site A: /etc/nginx/conf.d/site-a.conf
server {
listen 80;
server_name site-a.com www.site-a.com;
root /var/www/site-a/html;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
# Site B: /etc/nginx/conf.d/site-b.conf
server {
listen 80;
server_name site-b.com www.site-b.com;
root /var/www/site-b/html;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
# Default server (handles unmatched requests)
server {
listen 80 default_server;
server_name _;
return 444; # Close connection (no response)
}
4. Location Block - The Core of URL Matching
The location block is the most frequently used and most important directive in Nginx configuration. It allows different processing based on URL paths.
4.1 Matching Methods and Priority
| Syntax | Matching Method | Priority |
|---|---|---|
| `= /path` | Exact match | 1st (highest) |
| `^~ /path` | Preferential prefix match (skips regex search) | 2nd |
| `~ /regex` | Regular expression (case-sensitive) | 3rd (first matching regex in file order wins) |
| `~* /regex` | Regular expression (case-insensitive) | 3rd |
| `/path` | Prefix match (longest match) | 4th (default) |
4.2 Practical Location Examples
server {
listen 80;
server_name example.com;
root /var/www/example.com;
# Match only the exact root path
location = / {
# Main page specific handling
try_files /index.html =404;
}
# Paths starting with /api/ -> Backend proxy
location /api/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# Static files (images, CSS, JS)
location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2?)$ {
expires 30d; # 30-day cache
add_header Cache-Control "public, immutable";
access_log off; # Disable logging for static files
}
# PHP file handling
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
# Block access to hidden files (.htaccess, .git, etc.)
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
# Prevent favicon.ico 404 logging
location = /favicon.ico {
log_not_found off;
access_log off;
}
# robots.txt
location = /robots.txt {
log_not_found off;
access_log off;
}
}
Memorize the priority order: `=` (exact) -> `^~` (preferential prefix) -> `~`/`~*` (regex, checked in file order) -> normal prefix. Without a clear understanding of this order, you may encounter unexpected routing issues.
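To make the priority rules concrete, here is a minimal sketch (the hostname and paths are illustrative) showing which block each request resolves to:

```nginx
server {
    listen 80;
    server_name demo.local;   # hypothetical host for illustration

    # /img/logo.png -> "exact" (= beats everything else)
    location = /img/logo.png { return 200 "exact\n"; }

    # /img/a.png -> "prefix^~" (^~ matches as the longest prefix, so regexes are skipped)
    location ^~ /img/ { return 200 "prefix^~\n"; }

    # /css/a.png -> "regex" (longest prefix is plain "/", so regexes are still checked)
    location ~* \.png$ { return 200 "regex\n"; }

    # /about -> "fallback" (no exact, ^~, or regex match; longest plain prefix wins)
    location / { return 200 "fallback\n"; }
}
```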
5. Reverse Proxy Configuration
Reverse proxy is one of Nginx's most powerful features. It forwards client requests to backend servers (Node.js, Python, Java, etc.) and returns the responses.
5.1 Basic Reverse Proxy
# Node.js app proxy (port 3000)
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://127.0.0.1:3000;
# Essential header forwarding
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Timeout settings
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffer settings
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
}
}
5.2 WebSocket Proxy
# When WebSocket support is needed (chat, real-time apps)
server {
listen 80;
server_name ws.example.com;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
# WebSocket upgrade headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
# WebSocket connection keep-alive time
proxy_read_timeout 86400s; # 24 hours
proxy_send_timeout 86400s;
}
}
5.3 Multi-Backend Proxy by Path
# Microservice routing
server {
listen 80;
server_name example.com;
# Frontend (React/Vue SPA)
location / {
root /var/www/frontend/dist;
try_files $uri $uri/ /index.html; # SPA routing
}
# API server (Node.js)
location /api/ {
proxy_pass http://127.0.0.1:3000/; # Trailing / is important!
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# Auth server (Python)
location /auth/ {
proxy_pass http://127.0.0.1:5000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# File upload server (separate size limit)
location /upload/ {
client_max_body_size 100M; # Max upload 100MB
proxy_pass http://127.0.0.1:4000/;
proxy_set_header Host $host;
}
}
Pay close attention to the trailing slash (`/`) in the proxy_pass URL. `proxy_pass http://backend/;` strips the matched location prefix before forwarding, while `proxy_pass http://backend;` (no trailing slash) forwards the URI with the location path included.
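A minimal sketch of the two mappings side by side (the backend address is illustrative; nginx does not allow two identical location blocks in one server, hence the second variant is commented out):

```nginx
# With trailing slash: the matched prefix is replaced.
#   GET /api/users  ->  http://127.0.0.1:3000/users
location /api/ {
    proxy_pass http://127.0.0.1:3000/;
}

# Without trailing slash: the full original URI is forwarded.
#   GET /api/users  ->  http://127.0.0.1:3000/api/users
# location /api/ {
#     proxy_pass http://127.0.0.1:3000;
# }
```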
6. Load Balancing
Distribute traffic across multiple backend servers to achieve high availability and performance.
6.1 Load Balancing Methods
# === Round Robin (Default) ===
# Distributes requests sequentially in rotation
upstream backend_rr {
server 192.168.1.10:8080;
server 192.168.1.11:8080;
server 192.168.1.12:8080;
}
# === Weighted Round Robin ===
# Assign weights based on server capacity
upstream backend_weighted {
server 192.168.1.10:8080 weight=5; # 5x more requests
server 192.168.1.11:8080 weight=3;
server 192.168.1.12:8080 weight=1;
}
# === Least Connections ===
# Routes to the server with the fewest connections
upstream backend_lc {
least_conn;
server 192.168.1.10:8080;
server 192.168.1.11:8080;
server 192.168.1.12:8080;
}
# === IP Hash ===
# Same client IP always goes to the same server (session persistence)
upstream backend_hash {
ip_hash;
server 192.168.1.10:8080;
server 192.168.1.11:8080;
server 192.168.1.12:8080;
}
# Apply load balancer
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://backend_rr;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
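As a rough sanity check on the weighted method, each backend's expected traffic share is its weight divided by the weight sum. With the 5/3/1 weights from the example above (integer arithmetic, so percentages are truncated):

```shell
# Expected traffic share per backend under weight=5 / weight=3 / weight=1
w1=5; w2=3; w3=1
total=$((w1 + w2 + w3))                    # 9 requests per full rotation
echo "$((100 * w1 / total))% / $((100 * w2 / total))% / $((100 * w3 / total))%"
# prints 55% / 33% / 11%
```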
6.2 Health Checks and Failover
upstream backend {
server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
server 192.168.1.12:8080 backup; # Backup server (used only when all others fail)
server 192.168.1.13:8080 down; # Manually disabled
# max_fails=3: Deactivate server after 3 failures
# fail_timeout=30s: Retry after 30 seconds
# backup: Used only when all other servers are down
# down: Permanently disabled (for maintenance)
}
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://backend;
proxy_next_upstream error timeout http_500 http_502 http_503;
proxy_next_upstream_tries 3; # Try up to 3 different servers
proxy_next_upstream_timeout 10s; # Timeout
}
}
7. HTTPS / SSL Configuration
7.1 Issuing a Free Let's Encrypt Certificate
# Install Certbot
sudo apt install certbot python3-certbot-nginx -y # Ubuntu
sudo dnf install certbot python3-certbot-nginx -y # Rocky/RHEL
# Issue certificate (automatic Nginx configuration)
sudo certbot --nginx -d example.com -d www.example.com
# Test automatic renewal
sudo certbot renew --dry-run
# Verify auto-renewal cron
sudo systemctl status certbot.timer
7.2 Manual SSL Configuration (with Optimization)
# /etc/nginx/conf.d/example.com.conf
server {
listen 80;
server_name example.com www.example.com;
# HTTP -> HTTPS redirect
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
# SSL certificate paths
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# SSL protocols (allow TLS 1.2+ only)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# SSL session cache (improves handshake performance)
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
# OCSP Stapling (improves certificate verification speed)
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
# Security headers
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
add_header X-XSS-Protection "1; mode=block" always;
root /var/www/example.com/html;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
8. Security Configuration
8.1 Basic Security Hardening
# /etc/nginx/conf.d/security.conf (common security settings)
# Hide Nginx version information
server_tokens off;
# Request size limit (default 1MB)
client_max_body_size 10M;
# Request header/body timeouts
client_header_timeout 10s;
client_body_timeout 10s;
send_timeout 10s;
# Buffer overflow attack prevention
client_body_buffer_size 1K;
client_header_buffer_size 1k;
large_client_header_buffers 2 1k;
8.2 IP-Based Access Control
# IP restriction for admin pages
location /admin/ {
allow 10.0.0.0/8; # Allow internal network
allow 192.168.1.100; # Allow specific IP
deny all; # Block all others
proxy_pass http://127.0.0.1:8080;
}
# Block specific country/IP ranges
# Using the geo module
geo $blocked_ip {
default 0;
1.2.3.0/24 1; # IP range to block
5.6.7.0/24 1;
}
server {
if ($blocked_ip) {
return 403;
}
}
8.3 Rate Limiting
# Define zone in the http block
http {
# Limit to 10 requests per second per IP
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
# Limit concurrent connections per IP
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
}
# Apply in the server block
server {
# Apply rate limit to API
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
# burst=20: Allow up to 20 burst requests
# nodelay: Process excess immediately (no queuing)
limit_req_status 429; # Return 429 when limit exceeded
proxy_pass http://backend;
}
# Login page (stricter limit)
location /login {
limit_req zone=api_limit burst=5;
limit_conn conn_limit 5; # Limit to 5 concurrent connections
proxy_pass http://backend;
}
}
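As a rough mental model for `burst` with `nodelay`: over a T-second window, a continuously hammering client gets through approximately `burst + rate * T` requests before everything else is rejected with 429. This is a simplification of the token-bucket behavior (it ignores zone eviction and timing jitter), but it is useful for sizing:

```shell
# Approximate requests a bursting client can pass in a T-second window
rate=10    # r/s from the limit_req_zone definition
burst=20   # burst= parameter on limit_req
T=3
echo $(( burst + rate * T ))   # prints 50
```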
8.4 Blocking Bots and Scrapers
# Block malicious User-Agents
if ($http_user_agent ~* (scrapy|curl|wget|python-requests|httpclient|Go-http-client)) {
return 403;
}
# Block empty User-Agent
if ($http_user_agent = "") {
return 403;
}
# Block specific referrers (hotlink protection)
location ~* \.(jpg|jpeg|png|gif|webp)$ {
valid_referers none blocked server_names *.example.com;
if ($invalid_referer) {
return 403;
}
}
9. Performance Optimization
9.1 Gzip Compression
http {
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6; # Compression level (1-9, 6 recommended)
gzip_min_length 256; # Only compress responses over 256 bytes
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml
application/rss+xml
application/atom+xml
image/svg+xml
font/woff
font/woff2;
}
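A quick local demonstration of why `gzip_min_length` exists: the gzip container adds fixed header and trailer overhead, so very small bodies can come out larger than they went in:

```shell
# Compare raw vs gzipped size of a tiny payload
body='ok'
raw=$(printf '%s' "$body" | wc -c)             # 2 bytes of actual content
gz=$(printf '%s' "$body" | gzip -6 | wc -c)    # container overhead dominates (~20+ bytes)
echo "raw=$raw gzip=$gz"
```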
9.2 Static File Caching
# Browser caching for static resources
location ~* \.(jpg|jpeg|png|gif|ico|webp|avif)$ {
expires 365d;
add_header Cache-Control "public, immutable";
access_log off;
}
location ~* \.(css|js)$ {
expires 30d;
add_header Cache-Control "public";
access_log off;
}
location ~* \.(woff|woff2|ttf|otf|eot)$ {
expires 365d;
add_header Cache-Control "public, immutable";
access_log off;
add_header Access-Control-Allow-Origin "*"; # CORS (fonts)
}
location ~* \.(html|htm)$ {
expires 1h;
add_header Cache-Control "public, must-revalidate";
}
9.3 Proxy Caching
http {
# Define proxy cache zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
max_size=1g inactive=60m use_temp_path=off;
server {
location / {
proxy_pass http://backend;
# Enable caching
proxy_cache my_cache;
proxy_cache_valid 200 302 10m; # 200, 302 -> 10 min cache
proxy_cache_valid 404 1m; # 404 -> 1 min cache
# Cache status header (for debugging)
add_header X-Cache-Status $upstream_cache_status;
# Cache bypass conditions
proxy_cache_bypass $http_cache_control;
proxy_no_cache $http_pragma;
}
}
}
9.4 Worker Process Tuning
# Set according to CPU core count
worker_processes auto; # Auto-detect (recommended)
events {
worker_connections 4096; # Adjust based on server scale
use epoll; # Optimal event model for Linux
multi_accept on;
}
http {
# File descriptor caching
open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
}
# OS-level tuning (kernel parameters)
# Add to /etc/sysctl.conf
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
# File descriptor limit for worker processes
# /etc/security/limits.conf
# nginx soft nofile 65535
# nginx hard nofile 65535
# Or in the main context of nginx.conf:
worker_rlimit_nofile 65535;
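A back-of-envelope capacity estimate follows from these two settings. Note that when nginx acts as a reverse proxy, each client typically consumes two connections (client-side plus upstream-side), so roughly halve the figure:

```shell
# Theoretical max simultaneous clients = worker_processes * worker_connections
workers=4          # e.g. worker_processes auto on a 4-core machine
connections=4096   # worker_connections from the events block
echo $(( workers * connections ))   # prints 16384
```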
10. Log Configuration and Monitoring
10.1 Custom Log Formats
http {
# Detailed log format (including response time)
log_format detailed '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'$request_time $upstream_response_time';
# JSON log format (useful for ELK/Loki integration)
log_format json_log escape=json '{'
'"time":"$time_iso8601",'
'"remote_addr":"$remote_addr",'
'"request":"$request",'
'"status":$status,'
'"body_bytes_sent":$body_bytes_sent,'
'"request_time":$request_time,'
'"upstream_response_time":"$upstream_response_time",'
'"http_referer":"$http_referer",'
'"http_user_agent":"$http_user_agent"'
'}';
# Conditional logging (skip logging for 2xx/3xx responses)
map $status $loggable {
~^[23] 0;
default 1;
}
server {
access_log /var/log/nginx/access.log detailed;
# access_log /var/log/nginx/access.json json_log if=$loggable;
}
}
10.2 Log Rotation
# /etc/logrotate.d/nginx
/var/log/nginx/*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 0640 www-data adm
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
endscript
}
10.3 Nginx Status Monitoring (stub_status)
# Status page accessible only internally
server {
listen 8080;
server_name localhost;
location /nginx_status {
stub_status on;
allow 127.0.0.1;
allow 10.0.0.0/8;
deny all;
}
}
# Example output:
# Active connections: 291
# server accepts handled requests
# 16630948 16630948 31070465
# Reading: 6 Writing: 179 Waiting: 106
# Check status
curl http://localhost:8080/nginx_status
# For Prometheus integration, use nginx-prometheus-exporter
# docker run -p 9113:9113 nginx/nginx-prometheus-exporter -nginx.scrape-uri=http://host:8080/nginx_status
11. Production Operation Tips
11.1 Frequently Used Commands
# Validate configuration syntax (always run after modifications!)
sudo nginx -t
# Apply configuration without service interruption
sudo systemctl reload nginx
# Restart service (causes downtime)
sudo systemctl restart nginx
# Real-time log monitoring
tail -f /var/log/nginx/access.log
tail -f /var/log/nginx/error.log
# Filter by specific status code
tail -f /var/log/nginx/access.log | grep --line-buffered '" 500 '
# Check connection status
ss -tlnp | grep nginx
# Current active connection count
ss -s | head -5
# Check Nginx master/worker processes
ps aux | grep nginx
# Check configuration file paths and modules
nginx -V 2>&1 | tr ' ' '\n' | grep -E "^--"
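A couple of quick log-triage one-liners against the default combined log format (a tiny inline sample stands in for `/var/log/nginx/access.log`):

```shell
# Quick access-log triage on sample data in the default combined format
log=$(mktemp)
cat > "$log" <<'EOF'
10.0.0.1 - - [01/Jan/2025:00:00:00 +0000] "GET / HTTP/1.1" 200 612
10.0.0.2 - - [01/Jan/2025:00:00:01 +0000] "GET /api HTTP/1.1" 502 0
10.0.0.1 - - [01/Jan/2025:00:00:02 +0000] "GET /x HTTP/1.1" 404 153
EOF

# Requests per client IP, busiest first (field 1 = remote address)
awk '{print $1}' "$log" | sort | uniq -c | sort -rn

# Busiest client extracted for use in scripts
top_ip=$(awk '{print $1}' "$log" | sort | uniq -c | sort -rn | awk 'NR==1{print $2}')
echo "busiest: $top_ip"   # busiest: 10.0.0.1

rm -f "$log"
```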
11.2 Zero-Downtime Configuration Change Procedure
# 1. Edit configuration file
sudo nano /etc/nginx/conf.d/example.com.conf
# 2. Validate syntax (mandatory!)
sudo nginx -t
# nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /etc/nginx/nginx.conf test is successful
# 3. Apply with reload (no downtime)
sudo systemctl reload nginx
# If there is a syntax error:
# nginx: [emerg] unknown directive "servr" in /etc/nginx/conf.d/example.com.conf:2
# nginx: configuration file /etc/nginx/nginx.conf test failed
# -> Do NOT reload; fix the error first!
11.3 Common Mistakes and Solutions
| Symptom | Cause | Solution |
|---|---|---|
| 502 Bad Gateway | Backend server down or socket error | Check backend process, verify `proxy_pass` address |
| 504 Gateway Timeout | Backend response delay | Increase `proxy_read_timeout` value |
| 413 Request Entity Too Large | Upload size exceeded | Increase `client_max_body_size` value |
| 403 Forbidden | File permissions or SELinux | Check file owner/permissions, verify SELinux context |
| 301 Redirect Loop | HTTP/HTTPS configuration conflict | Separate server blocks, check `$scheme` condition |
| Real-time logs not appearing | `access_log off;` or buffering | Check log settings in the location block |
12. Configuration File Checklist
Before deploying Nginx to a production server, make sure to verify the following items:
- Basic Security: Is `server_tokens off;` configured?
- HTTPS: Is all HTTP traffic redirected to HTTPS?
- SSL Protocol: Are only TLS 1.2 and above allowed?
- Security Headers: Are HSTS, X-Frame-Options, and X-Content-Type-Options configured?
- File Access: Are hidden files like `.git` and `.env` blocked?
- Upload Limit: Is `client_max_body_size` set to an appropriate value?
- Rate Limiting: Are limits configured for sensitive paths like APIs and login?
- Gzip Compression: Is compression enabled for text-based responses?
- Caching: Are proper `expires` settings configured for static files?
- Logging: Are per-site logs separated with rotation configured?
- Default Server: Are unmatched requests handled (`return 444;`)?
- Timeouts: Are appropriate timeouts like `proxy_read_timeout` configured?
- Backup: Are configuration files backed up and version controlled (Git recommended)?
Conclusion: With Nginx, Configuration Is Expertise
Nginx is easy to install but requires a deep understanding of configuration to operate properly. The content covered in this guide is based on the most commonly encountered scenarios in production environments.
Here is a summary of the key takeaways:
- Master the fundamentals - Understand the block structure and inheritance hierarchy of `nginx.conf`.
- Memorize the location priority - `=` -> `^~` -> `~`/`~*` -> normal prefix. Knowing this alone will resolve most routing issues.
- Always run `nginx -t` before changes - Configuration errors directly lead to service outages. Syntax validation is not optional; it is mandatory.
- reload != restart - `reload` applies changes without downtime, while `restart` causes service interruption.
- Security from the start - Configure HTTPS, security headers, and rate limiting before going live.
- Logs are assets - Setting up custom log formats and analyzing them regularly provides tremendous help in preventing incidents and improving performance.
Try adapting the configuration examples in this guide to suit your own environment. As you gain hands-on experience, Nginx configuration will become increasingly natural, and you will be able to confidently architect even complex setups.