Introduction: Why Do We Need a Reverse Proxy?

Modern web services are rarely operated on a single server. The frontend is typically built with React or Vue and served as static files, while the backend is handled by application servers written in Node.js, Python, Java, or Go. Combined with databases, caches, and message queues, a service becomes a complex system composed of dozens of components.

This is where the Nginx reverse proxy comes in - it orchestrates traffic in front of all these components. Users connect through a single entry point like https://example.com, but Nginx routes traffic to the appropriate backend based on the request path and hostname. This allows SSL termination, load balancing, caching, security, compression, and monitoring to be managed centrally and consistently.

This guide covers everything you need in practice, from the concept of an Nginx reverse proxy to real-world configuration, optimization, and troubleshooting. It includes integration examples for various backends such as Node.js, Python, and Java, handling of special protocols like WebSocket and gRPC, and solutions to common operational issues - all with detailed, real configuration files.

1. Reverse Proxy vs Forward Proxy

1.1 Conceptual Differences

A forward proxy sits on the client side and forwards requests to external servers on behalf of the client. Corporate firewalls, VPNs, and school internet filters are typical examples. The client is aware of the proxy, but the server is not.

A reverse proxy sits on the server side and receives client requests on behalf of the server. The client perceives the proxy as the actual server and is unaware of the real backend servers hidden behind it.

Category          | Forward Proxy                          | Reverse Proxy
------------------|----------------------------------------|------------------------------------------
Location          | Client side                            | Server side
Acts on behalf of | Client                                 | Server
Primary Use       | Access control, caching, anonymization | Load balancing, SSL termination, caching
Examples          | Squid, Privoxy                         | Nginx, HAProxy, Traefik
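The table can be made concrete with a toy reverse proxy: the client talks only to the proxy's address, and the proxy privately forwards each request to a backend the client never sees. This is an illustrative, standard-library-only Python sketch (the handler names and response body are invented for the example), not production code:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BackendHandler(BaseHTTPRequestHandler):
    """The hidden upstream server: only the proxy ever connects to it."""
    def do_GET(self):
        body = b"hello from backend"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence request logging
        pass

class ProxyHandler(BaseHTTPRequestHandler):
    """The reverse proxy: receives the client's request, forwards it
    to the backend, and relays the response back to the client."""
    backend_port = None  # filled in once the backend is listening
    def do_GET(self):
        upstream = f"http://127.0.0.1:{self.backend_port}{self.path}"
        with urllib.request.urlopen(upstream) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

def start(server):
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]  # the OS-assigned port

backend = HTTPServer(("127.0.0.1", 0), BackendHandler)
ProxyHandler.backend_port = start(backend)
proxy = HTTPServer(("127.0.0.1", 0), ProxyHandler)
proxy_port = start(proxy)
```

A client fetching http://127.0.0.1:&lt;proxy_port&gt;/ receives the backend's response without ever learning the backend's address - exactly the relationship in the table: the proxy acts on behalf of the server.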

1.2 Key Benefits of a Reverse Proxy

  • Single entry point: Consolidate multiple backend servers under a single domain/IP
  • SSL termination: Centralize HTTPS handling at the proxy; backends can use plain HTTP
  • Load balancing: Distribute traffic across multiple backends
  • Caching: Reduce backend load and improve response times
  • Compression: Handle Gzip/Brotli compression at the proxy
  • Security: Prevent direct exposure of backend servers; enable WAF integration
  • Rate limiting: Manage request limits centrally
  • Centralized logging: Log all requests in one place

2. proxy_pass Basics

2.1 The Most Basic Proxy Configuration

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

This alone will give you a working reverse proxy. However, in practice you must set at least the following headers:

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;

        # Forward the original host information
        proxy_set_header Host $host;

        # Real client IP (used by the backend for logging)
        proxy_set_header X-Real-IP $remote_addr;

        # Proxy chain information
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Original protocol (http/https)
        proxy_set_header X-Forwarded-Proto $scheme;

        # Original port
        proxy_set_header X-Forwarded-Port $server_port;

        # Original host
        proxy_set_header X-Forwarded-Host $host;
    }
}
Important: If you don't set proxy_set_header Host $host;, Nginx defaults to passing the host specified in proxy_pass (in this case 127.0.0.1:3000) as the Host header. This causes problems for backends that rely on virtual host routing.
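The $proxy_add_x_forwarded_for variable used above appends the connecting peer's address to any X-Forwarded-For header already present, building a comma-separated chain hop by hop. A small Python sketch of that behavior, plus how a backend might recover the real client IP from the chain (the addresses are examples; real backends should only trust entries appended by proxies they control):

```python
def proxy_add_x_forwarded_for(incoming_xff, remote_addr):
    """Mimic nginx's $proxy_add_x_forwarded_for: append the connecting
    peer's address to any X-Forwarded-For header already received."""
    if incoming_xff:
        return f"{incoming_xff}, {remote_addr}"
    return remote_addr

def client_ip(xff, trusted_proxies):
    """Walk the chain right to left, skipping trusted proxy addresses;
    the first untrusted hop is the best guess at the real client."""
    hops = [h.strip() for h in xff.split(",")]
    for hop in reversed(hops):
        if hop not in trusted_proxies:
            return hop
    return hops[0]
```

Walking right to left matters because the leftmost entries are client-supplied and trivially forgeable; only the entries your own proxies appended can be trusted.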

2.2 The Trailing Slash in proxy_pass URL

The presence or absence of a trailing slash (/) at the end of a proxy_pass URL completely changes its behavior. This is one of the most common points of confusion for Nginx beginners:

# Case 1: No trailing slash on proxy_pass
# /api/users -> http://backend/api/users (path passed through as-is)
location /api/ {
    proxy_pass http://backend;
}

# Case 2: Trailing slash on proxy_pass
# /api/users -> http://backend/users (location path is stripped)
location /api/ {
    proxy_pass http://backend/;
}

# Case 3: Path replacement
# /api/users -> http://backend/v1/users (/api/ replaced with /v1/)
location /api/ {
    proxy_pass http://backend/v1/;
}
Remember:
  • No URI in proxy_pass (no trailing slash) = the request path is passed upstream as-is
  • A URI in proxy_pass (even just a trailing slash) = the matched location prefix is replaced by that URI
Misunderstanding this distinction is a common cause of 404 errors and incorrect routing.
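The rewrite rule for prefix locations can be expressed as a small function; this sketch mirrors the three cases above (prefix locations only - regex locations and rewrite rules behave differently):

```python
def map_proxy_path(request_uri, location, proxy_pass_uri=None):
    """Mimic how nginx derives the upstream path for a prefix location.

    proxy_pass_uri=None models `proxy_pass http://backend;` (no URI part):
    the request path is passed through unchanged. A URI part such as "/"
    or "/v1/" replaces the matched location prefix.
    """
    if proxy_pass_uri is None:
        return request_uri
    return proxy_pass_uri + request_uri[len(location):]
```

The three cases from the configuration above map directly: no URI passes /api/users through, "/" strips the /api/ prefix, and "/v1/" substitutes it.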

3. Proxy Configuration by Backend Type

3.1 Node.js (Express, Next.js)

# Node.js app (default port 3000)
upstream nodejs_backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://nodejs_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";    # enable keepalive

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Node.js requires trust proxy setting for req.ip
        # app.set('trust proxy', true);

        proxy_read_timeout 300;
        proxy_connect_timeout 75;
    }
}

Next.js-Specific Configuration

server {
    listen 80;
    server_name next.example.com;

    # Next.js static files (generated at build time)
    location /_next/static/ {
        proxy_pass http://127.0.0.1:3000;
        # note: proxy_cache_valid only takes effect when a proxy_cache zone is configured
        proxy_cache_valid 200 60m;
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Image optimization API
    location /_next/image {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache_valid 200 60m;
    }

    # All other requests
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

3.2 Python (Django, Flask, FastAPI)

# Django/Flask/FastAPI running under Gunicorn
upstream python_backend {
    server 127.0.0.1:8000 fail_timeout=30s;
}

server {
    listen 80;
    server_name py.example.com;

    # Serve static files directly with Nginx (e.g., Django collectstatic)
    location /static/ {
        alias /var/www/example/static/;
        expires 30d;
        access_log off;
    }

    location /media/ {
        alias /var/www/example/media/;
        expires 7d;
    }

    location / {
        proxy_pass http://python_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Python apps may have long response times
        proxy_read_timeout 120;
        proxy_connect_timeout 75;

        # Django: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
        # Flask: app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)
    }
}
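The ProxyFix middleware mentioned in the comments above can be approximated in plain WSGI. This is a rough illustrative sketch, not the real werkzeug implementation, and it assumes exactly one trusted proxy hop (nginx) in front of the app:

```python
def proxy_fix(app, trusted_hops=1):
    """Trust the X-Forwarded-* headers set by nginx and patch the WSGI
    environ so the app sees the original scheme and client address.
    Only safe when nginx strips/overwrites these headers itself."""
    def middleware(environ, start_response):
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        if proto:
            environ["wsgi.url_scheme"] = proto
        xff = environ.get("HTTP_X_FORWARDED_FOR")
        if xff:
            # the rightmost entries were appended by our own proxies,
            # so count back `trusted_hops` to find the real client
            hops = [h.strip() for h in xff.split(",")]
            environ["REMOTE_ADDR"] = hops[-trusted_hops]
        return app(environ, start_response)
    return middleware
```

Wrapping any WSGI app (Django, Flask, FastAPI via an adapter) this way is what makes request.is_secure() / request.remote_addr report the original client rather than the proxy.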

Using a Unix Socket (Better Performance)

# When Gunicorn is running on a Unix socket
# gunicorn app:app --bind unix:/tmp/gunicorn.sock
upstream python_backend {
    server unix:/tmp/gunicorn.sock;
}

server {
    listen 80;
    server_name py.example.com;

    location / {
        proxy_pass http://python_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

3.3 Java (Spring Boot, Tomcat)

# Spring Boot app (default port 8080)
upstream spring_backend {
    server 127.0.0.1:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name java.example.com;

    client_max_body_size 50M;    # Java apps often have large payloads

    location / {
        proxy_pass http://spring_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Spring Boot application.properties:
        # server.forward-headers-strategy=native
        # server.tomcat.remote-ip-header=X-Forwarded-For
        # server.tomcat.protocol-header=X-Forwarded-Proto

        proxy_read_timeout 300;
        proxy_buffer_size 16k;
        proxy_buffers 8 16k;
    }
}

3.4 PHP-FPM (A Separate Case)

PHP uses fastcgi_pass instead of proxy_pass. Strictly speaking, this is not a reverse proxy but rather a FastCGI proxy:

server {
    listen 80;
    server_name php.example.com;
    root /var/www/php-app/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

4. WebSocket Proxying

WebSocket uses the HTTP/1.1 Upgrade mechanism, so it will not work behind a standard HTTP proxy configuration. It requires the following special settings:

# Basic WebSocket proxy configuration
server {
    listen 80;
    server_name ws.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;

        # Required WebSocket settings
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket connections stay open for a long time
        proxy_read_timeout 86400s;    # 24 hours
        proxy_send_timeout 86400s;

        # Disable buffering (real-time bidirectional communication)
        proxy_buffering off;
    }
}
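For reference, the handshake that travels through this proxied connection is defined by RFC 6455: the backend hashes the client's Sec-WebSocket-Key together with a fixed GUID and returns the result as Sec-WebSocket-Accept. nginx computes none of this - it only has to forward the Upgrade/Connection headers intact so the endpoints can complete it. A Python sketch:

```python
import base64
import hashlib

# Fixed GUID from RFC 6455, used in the WebSocket opening handshake
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept value the backend must return
    for a given client Sec-WebSocket-Key."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()
```

If the proxy drops or rewrites the Upgrade headers, this exchange never completes and clients see the connection fall back to (and fail as) plain HTTP.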

4.1 Handling HTTP and WebSocket Simultaneously

# For cases like Socket.IO that handle both HTTP and WebSocket on the same port
# Define the $connection_upgrade map (in the http block)
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name chat.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 86400s;
    }

    # When WebSocket is routed through a separate path
    location /socket.io/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

5. Microservices Routing

Consolidating multiple backend services under a single domain is one of the most powerful use cases for a reverse proxy.

5.1 Path-Based Routing

# Define upstreams
upstream frontend { server 127.0.0.1:3000; }
upstream api_service { server 127.0.0.1:4000; }
upstream auth_service { server 127.0.0.1:5000; }
upstream upload_service { server 127.0.0.1:6000; }

server {
    listen 80;
    server_name example.com;

    # Frontend (SPA)
    location / {
        proxy_pass http://frontend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # API service
    location /api/ {
        proxy_pass http://api_service/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Authentication service
    location /auth/ {
        proxy_pass http://auth_service/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Apply strict rate limiting to the auth service
        limit_req zone=auth_limit burst=5 nodelay;
    }

    # Upload service (handles large files)
    location /upload/ {
        proxy_pass http://upload_service/;
        proxy_set_header Host $host;

        # Allow large request bodies for uploads
        client_max_body_size 500M;

        # Long timeouts
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        # Disable buffering (streaming uploads)
        proxy_request_buffering off;
    }
}

5.2 Hostname-Based Routing (Subdomains)

# Route each subdomain to a different service
server {
    listen 80;
    server_name app.example.com;
    location / { proxy_pass http://127.0.0.1:3000; proxy_set_header Host $host; }
}

server {
    listen 80;
    server_name api.example.com;
    location / { proxy_pass http://127.0.0.1:4000; proxy_set_header Host $host; }
}

server {
    listen 80;
    server_name admin.example.com;
    location / {
        allow 10.0.0.0/8;
        deny all;
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
    }
}

6. Integrating Load Balancing

When you operate multiple backend instances, a reverse proxy can naturally handle load balancing for you.

# Multiple backend instances
upstream api_cluster {
    # Load balancing method (round-robin by default)
    least_conn;

    server 10.0.0.10:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.0.11:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 weight=1 max_fails=3 fail_timeout=30s;

    server 10.0.0.13:8080 backup;   # backup server
    server 10.0.0.14:8080 down;     # under maintenance

    keepalive 32;    # number of idle connections kept per worker
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://api_cluster;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Failure handling
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_next_upstream_tries 3;
        proxy_next_upstream_timeout 10s;
    }
}
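nginx's default balancing method (used when no directive like least_conn is given) is weighted round-robin, implemented as a "smooth" variant that interleaves picks instead of sending bursts to the heaviest server. A sketch under that assumption, reusing the example addresses and weights from above:

```python
class SmoothWeightedRR:
    """Sketch of smooth weighted round-robin: each pick boosts every
    server's current weight by its configured weight, selects the
    highest, then penalizes the winner by the total weight."""
    def __init__(self, servers):
        # servers: mapping of address -> weight
        self.weights = dict(servers)
        self.current = {name: 0 for name in servers}
        self.total = sum(servers.values())

    def pick(self):
        for name, weight in self.weights.items():
            self.current[name] += weight
        best = max(self.current, key=self.current.get)
        self.current[best] -= self.total
        return best
```

Over any window of total-weight picks, each server is chosen exactly in proportion to its weight, but never several times in a row when others are due - which is what keeps backends evenly loaded.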

7. Proxy Caching

Using a reverse proxy's caching feature can significantly reduce backend load.
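Before the configuration, it helps to know where cache entries physically live: with proxy_cache_path, the file name is the MD5 of the cache key, and the levels parameter builds subdirectories from the tail of that hex digest (so a single directory never holds millions of files). A sketch of the layout for levels=1:2, following nginx's documented scheme:

```python
import hashlib

def cache_file_path(root, key, levels=(1, 2)):
    """Compute where nginx stores a cached response: the file name is
    md5(cache_key) in hex, and each level takes that many characters
    working backward from the end of the digest."""
    h = hashlib.md5(key.encode()).hexdigest()
    parts, pos = [], len(h)
    for n in levels:
        parts.append(h[pos - n:pos])
        pos -= n
    return "/".join([root] + parts + [h])
```

This is also why changing proxy_cache_key invalidates everything at once: every key hashes to a different file, and the old files simply age out via the inactive timer.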

# Define the cache zone in the http block
http {
    proxy_cache_path /var/cache/nginx/api
                     levels=1:2
                     keys_zone=api_cache:10m
                     max_size=1g
                     inactive=60m
                     use_temp_path=off;

    server {
        listen 80;
        server_name api.example.com;

        location / {
            proxy_pass http://api_backend;
            proxy_set_header Host $host;

            # Enable caching
            proxy_cache api_cache;
            proxy_cache_key "$scheme$request_method$host$request_uri";

            # Cache validity times
            proxy_cache_valid 200 302 10m;   # successful responses: 10 minutes
            proxy_cache_valid 404 1m;        # 404: 1 minute
            proxy_cache_valid any 30s;       # everything else: 30 seconds

            # Cache lock (only call the backend once for concurrent requests)
            proxy_cache_lock on;
            proxy_cache_lock_timeout 5s;

            # Conditions for using stale cache
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
            proxy_cache_background_update on;

            # Cache status header (for debugging)
            add_header X-Cache-Status $upstream_cache_status;

            # Cache bypass conditions
            proxy_cache_bypass $cookie_nocache $arg_nocache;
            proxy_no_cache $cookie_nocache $arg_nocache;
        }
    }
}
Checking cache status: You can inspect cache behavior via the X-Cache-Status response header.
  • HIT - Served from cache
  • MISS - Not in cache; the backend was called
  • EXPIRED - Cache entry expired; the backend was called
  • BYPASS - A cache bypass condition was triggered
  • UPDATING - Cache entry being refreshed (a stale response was served)

8. Buffering and Timeout Tuning

8.1 Understanding Buffering

By default, Nginx buffers backend responses. It collects the backend's response in memory (or on disk) before forwarding it to the client. This plays an important role in preventing slow clients from tying up backend connections for too long.

location / {
    proxy_pass http://backend;

    # Response buffering configuration
    proxy_buffering on;              # default
    proxy_buffer_size 4k;            # buffer for response headers
    proxy_buffers 8 4k;              # buffers for response body (count x size)
    proxy_busy_buffers_size 8k;      # buffers currently being sent to the client
    proxy_max_temp_file_size 1024m;  # max size of temporary file
    proxy_temp_file_write_size 8k;

    # Request buffering
    proxy_request_buffering on;      # default
    client_body_buffer_size 16k;
    client_max_body_size 10m;
}

8.2 Streaming Responses (Disabling Buffering)

# Server-Sent Events (SSE), streaming APIs, large file downloads
location /stream/ {
    proxy_pass http://backend;

    # Disable buffering
    proxy_buffering off;
    proxy_request_buffering off;

    # Enable chunked transfer
    proxy_http_version 1.1;
    proxy_set_header Connection "";

    # No caching for streamed responses
    proxy_cache off;

    # Long timeouts
    proxy_read_timeout 24h;
}

8.3 Timeout Settings

location / {
    proxy_pass http://backend;

    # Backend connection timeout (TCP handshake)
    proxy_connect_timeout 60s;

    # Timeout for sending a request to the backend
    proxy_send_timeout 60s;

    # Timeout for waiting on the backend response
    proxy_read_timeout 60s;
}

9. Security Headers and Client Protection

server {
    listen 443 ssl http2;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    # Security headers (applied to all locations)
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Frame-Options "DENY" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Content-Security-Policy "default-src 'self'" always;
    add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Hide sensitive headers returned by the backend
        proxy_hide_header X-Powered-By;
        proxy_hide_header X-AspNet-Version;
        proxy_hide_header Server;
    }
}

9.1 Rate Limiting and Connection Limits

# Define zones in the http block
http {
    # Requests per second per IP
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
    limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;

    # Concurrent connections per IP
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

    server {
        location / {
            limit_req zone=general burst=20 nodelay;
            limit_conn conn_limit 10;
            proxy_pass http://backend;
        }

        location /login {
            limit_req zone=login burst=3 nodelay;
            proxy_pass http://auth_backend;
        }

        location /api/ {
            limit_req zone=api burst=50 nodelay;
            proxy_pass http://api_backend;
        }
    }
}
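limit_req implements a leaky bucket: requests drain at the configured rate, and burst sets how much short-term excess is tolerated. A simplified sketch of the accounting (nginx internally tracks excess in milliseconds and shares state across workers; this models the nodelay behavior where burst requests pass immediately):

```python
class LimitReq:
    """Leaky-bucket accounting in the spirit of limit_req:
    rate in requests/second, burst as in the nginx directive."""
    def __init__(self, rate, burst):
        self.rate = float(rate)
        self.burst = burst
        self.excess = 0.0   # requests currently "in the bucket"
        self.last = None    # timestamp of the previous request

    def allow(self, now):
        if self.last is not None:
            # the bucket drains at `rate` requests per second
            self.excess = max(self.excess - (now - self.last) * self.rate, 0.0)
        self.last = now
        if self.excess + 1 > self.burst + 1:
            return False    # nginx answers 503 (or limit_req_status)
        self.excess += 1
        return True
```

With rate=10r/s and burst=5, a burst of 7 simultaneous requests sees 6 accepted and the 7th rejected; a second later the bucket has drained and requests flow again.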

10. Real-World Troubleshooting

10.1 504 Gateway Timeout

Cause: The backend response exceeded proxy_read_timeout (default 60 seconds)
Solution:

location / {
    proxy_pass http://backend;
    proxy_connect_timeout 75s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;   # increase as needed
}

10.2 502 Bad Gateway

Cause: The backend server is down or unreachable
Solution:

# Check backend process
ps aux | grep node    # or python, java, etc.

# Check port
ss -tlnp | grep 3000

# Check Nginx error log
tail -f /var/log/nginx/error.log | grep upstream

# Test the backend directly
curl -v http://127.0.0.1:3000/

10.3 413 Request Entity Too Large

Cause: Upload file size exceeds client_max_body_size (default 1MB)
Solution:

# Global setting (http block)
client_max_body_size 100M;

# Or only for a specific location
location /upload/ {
    client_max_body_size 500M;
    proxy_pass http://upload_backend;
}

10.4 Backend App Redirects to HTTP

Symptom: You accessed via HTTPS, but after a backend redirect the URL drops to HTTP
Cause: The backend does not know the original scheme
Solution:

location / {
    proxy_pass http://backend;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;

    # Or rewrite redirects at the Nginx level
    proxy_redirect http:// https://;
}
Per-framework settings:
Express: app.set('trust proxy', true)
Django: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
Flask: ProxyFix(app.wsgi_app, x_proto=1)
Spring Boot: server.forward-headers-strategy=native

10.5 CORS Preflight Issues

location /api/ {
    # Handle CORS preflight requests
    # (echoing $http_origin permits any origin; whitelist origins in production)
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' '$http_origin' always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, Accept' always;
        add_header 'Access-Control-Max-Age' 1728000 always;
        add_header 'Content-Type' 'text/plain; charset=utf-8' always;
        add_header 'Content-Length' 0 always;
        return 204;
    }

    # Actual request
    add_header 'Access-Control-Allow-Origin' '$http_origin' always;
    add_header 'Access-Control-Allow-Credentials' 'true' always;

    proxy_pass http://api_backend;
    proxy_set_header Host $host;
}

11. Reverse Proxy Operations Checklist

  • Proxy header forwarding: Configure Host, X-Real-IP, X-Forwarded-For, X-Forwarded-Proto
  • Backend framework configuration: Enable trust proxy settings so the original IP/protocol is recognized
  • Appropriate timeouts: Tune read/connect/send timeouts to match your service characteristics
  • Upload size limits: Set client_max_body_size to a suitable value
  • WebSocket support: Configure Upgrade headers when needed
  • Load balancer failure handling: Use max_fails, fail_timeout, and backup servers
  • Security headers: Apply HSTS, X-Frame-Options, and more
  • Rate limiting: Apply limits on sensitive paths (login, API)
  • Log separation: Split proxy logs and error logs per service
  • Monitoring: Collect upstream_response_time logs
  • Backend health checks: Run periodic health-check scripts
  • Configuration version control: Manage config files with Git

Conclusion: Reverse Proxy Is a Core Skill of Modern Web Infrastructure

The Nginx reverse proxy is far more than a simple "request forwarder" - it is a core component of modern web infrastructure. Almost every operational concern, including SSL termination, load balancing, caching, security, and monitoring, can be centralized in one place, allowing your backend applications to focus purely on business logic.

Key takeaways from this guide:

  • proxy_set_header is essential - Always set Host, X-Real-IP, and X-Forwarded-* headers.
  • The trailing slash in proxy_pass - Path behavior changes completely, so make sure you understand it precisely.
  • Tune for each backend's characteristics - Node.js benefits from keepalive, Python from Unix sockets, Java from larger buffers.
  • WebSocket requires the Upgrade header - It will not work with a standard HTTP proxy configuration.
  • Set timeouts carefully - Too short and legitimate requests fail; too long and you waste resources.
  • Use caching to ease backend load - Most GET requests are good candidates for caching.
  • Consider security alongside performance - The proxy is your security boundary, so always configure rate limiting and security headers.

Adapt the configuration examples in this guide to fit your own service environment and put them into practice. Reverse proxy configuration becomes more refined the more real-world experience you accumulate, allowing you to build stable and scalable web infrastructure.